From 2d32e129cad0a51eaca98733f6e82e5622948176 Mon Sep 17 00:00:00 2001 From: jay vyas Date: Fri, 29 Apr 2022 19:45:21 -0400 Subject: [PATCH] Conformance results for v1.22/VMWare Tanzu Kubernetes Grid 1.5.3 (#1941) --- .../vmware-tanzu-kubernetes-grid/PRODUCT.yaml | 8 + v1.22/vmware-tanzu-kubernetes-grid/README.md | 21 + v1.22/vmware-tanzu-kubernetes-grid/e2e.log | 14718 ++++++++++++ .../vmware-tanzu-kubernetes-grid/junit_01.xml | 18610 ++++++++++++++++ 4 files changed, 33357 insertions(+) create mode 100644 v1.22/vmware-tanzu-kubernetes-grid/PRODUCT.yaml create mode 100644 v1.22/vmware-tanzu-kubernetes-grid/README.md create mode 100644 v1.22/vmware-tanzu-kubernetes-grid/e2e.log create mode 100644 v1.22/vmware-tanzu-kubernetes-grid/junit_01.xml diff --git a/v1.22/vmware-tanzu-kubernetes-grid/PRODUCT.yaml b/v1.22/vmware-tanzu-kubernetes-grid/PRODUCT.yaml new file mode 100644 index 0000000000..55f4d60c3c --- /dev/null +++ b/v1.22/vmware-tanzu-kubernetes-grid/PRODUCT.yaml @@ -0,0 +1,8 @@ +vendor: VMware +name: VMware Tanzu Kubernetes Grid +version: 1.5.3 +website_url: https://tanzu.vmware.com/kubernetes-grid +documentation_url: https://docs.vmware.com +product_logo_url: https://landscape.cncf.io/logos/v-mware-tanzu-kubernetes-grid.svg +type: distribution +description: VMware Tanzu Kubernetes Grid is VMware’s Kubernetes distribution - built on open source technologies, packaged for enterprise adoption and supported 24x7 by VMware Global Support Services (GSS). diff --git a/v1.22/vmware-tanzu-kubernetes-grid/README.md b/v1.22/vmware-tanzu-kubernetes-grid/README.md new file mode 100644 index 0000000000..347184a2e8 --- /dev/null +++ b/v1.22/vmware-tanzu-kubernetes-grid/README.md @@ -0,0 +1,21 @@ +# To Reproduce + +## Deploy a Tanzu Kubernetes Grid environment + +Download the Tanzu CLI from vmware.com and set up a Tanzu Kubernetes Grid management cluster first: + +```console +$ tanzu management-cluster create --ui +``` + +Set up the management cluster with the `production` plan (3 control-plane nodes, 3 worker nodes). Once the management cluster is up and running, deploy a workload cluster to run the CNCF conformance suite: + +```console +$ tanzu cluster create tkg-conformance +``` + +Once the workload cluster is up, switch to the newly created context with `kubectl config use-context <cluster-context>`. + +## Deploy the Sonobuoy conformance test + +Follow the conformance suite instructions in this repository to run the tests; a sketch of a typical Sonobuoy invocation follows.
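 + +For example, with the Sonobuoy CLI installed and `kubectl` pointing at the `tkg-conformance` cluster, a run looks roughly like this (a minimal sketch: `run --mode=certified-conformance`, `retrieve`, and `results` are standard Sonobuoy subcommands, but exact behavior can vary by Sonobuoy version, and `<tarball>` stands for the file name that `sonobuoy retrieve` prints): + +```console +$ sonobuoy run --mode=certified-conformance --wait +$ sonobuoy retrieve . +$ sonobuoy results <tarball>.tar.gz +``` + +The `e2e.log` and `junit_01.xml` in this submission are taken from the retrieved results tarball. diff --git a/v1.22/vmware-tanzu-kubernetes-grid/e2e.log b/v1.22/vmware-tanzu-kubernetes-grid/e2e.log new file mode 100644 index 0000000000..d398ccb256 --- /dev/null +++ b/v1.22/vmware-tanzu-kubernetes-grid/e2e.log @@ -0,0 +1,14718 @@ +I0429 18:13:58.989908 25 e2e.go:129] Starting e2e run "75e00e5f-c4ea-4979-9e44-b3957b24b942" on Ginkgo node 1 +{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1651256038 - Will randomize all specs +Will run 346 of 6433 specs + +Apr 29 18:14:01.508: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 18:14:01.526: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Apr 29 18:14:01.571: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Apr 29 18:14:01.639: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Apr 29 18:14:01.639: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready. 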
+Apr 29 18:14:01.639: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Apr 29 18:14:01.666: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'antrea-agent' (0 seconds elapsed) +Apr 29 18:14:01.666: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Apr 29 18:14:01.666: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'vsphere-cloud-controller-manager' (0 seconds elapsed) +Apr 29 18:14:01.666: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'vsphere-csi-node' (0 seconds elapsed) +Apr 29 18:14:01.666: INFO: e2e test version: v1.22.4 +Apr 29 18:14:01.669: INFO: kube-apiserver version: v1.22.8+vmware.1 +Apr 29 18:14:01.670: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 18:14:01.789: INFO: Cluster IP family: ipv4 +SSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:14:01.789: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +W0429 18:14:01.945970 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Apr 29 18:14:01.947: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 18:14:01.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf" in namespace "downward-api-225" to be "Succeeded or Failed" +Apr 29 18:14:01.977: INFO: Pod "downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298839ms +Apr 29 18:14:03.983: INFO: Pod "downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf": Phase="Running", Reason="", readiness=true. Elapsed: 2.012127331s +Apr 29 18:14:05.988: INFO: Pod "downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017214169s +STEP: Saw pod success +Apr 29 18:14:05.988: INFO: Pod "downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf" satisfied condition "Succeeded or Failed" +Apr 29 18:14:05.992: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf container client-container: +STEP: delete the pod +Apr 29 18:14:06.028: INFO: Waiting for pod downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf to disappear +Apr 29 18:14:06.032: INFO: Pod downwardapi-volume-8de0fe84-077c-4db6-afed-3f050aedd8bf no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:14:06.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-225" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":1,"skipped":3,"failed":0} +SS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:14:06.044: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-f2d8ec41-091e-4138-8b5c-e025e2787b46 +STEP: Creating a pod to test consume secrets +Apr 29 18:14:06.807: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2" in namespace "projected-893" to be "Succeeded or Failed" +Apr 29 18:14:06.812: INFO: Pod "pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.844506ms +Apr 29 18:14:08.818: INFO: Pod "pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01156182s +STEP: Saw pod success +Apr 29 18:14:08.818: INFO: Pod "pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2" satisfied condition "Succeeded or Failed" +Apr 29 18:14:08.822: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2 container projected-secret-volume-test: +STEP: delete the pod +Apr 29 18:14:08.840: INFO: Waiting for pod pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2 to disappear +Apr 29 18:14:08.844: INFO: Pod pod-projected-secrets-0dabd2f5-ab88-42ce-b4c8-a728e39532e2 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:14:08.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-893" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":5,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:14:08.856: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Apr 29 18:14:48.956: INFO: The status of Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf is Running (Ready = true) +Apr 29 18:14:49.183: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Apr 29 18:14:49.183: INFO: Deleting pod "simpletest.rc-5fxjn" in namespace "gc-5258" +Apr 29 18:14:49.194: INFO: Deleting pod "simpletest.rc-652jg" in namespace "gc-5258" +Apr 29 18:14:49.213: INFO: Deleting pod "simpletest.rc-6nfhq" in namespace "gc-5258" +Apr 29 18:14:49.225: 
INFO: Deleting pod "simpletest.rc-cd8h4" in namespace "gc-5258" +Apr 29 18:14:49.235: INFO: Deleting pod "simpletest.rc-cdt6v" in namespace "gc-5258" +Apr 29 18:14:49.251: INFO: Deleting pod "simpletest.rc-gmwxs" in namespace "gc-5258" +Apr 29 18:14:49.262: INFO: Deleting pod "simpletest.rc-k7tjt" in namespace "gc-5258" +Apr 29 18:14:49.276: INFO: Deleting pod "simpletest.rc-pzwf9" in namespace "gc-5258" +Apr 29 18:14:49.287: INFO: Deleting pod "simpletest.rc-wxr5k" in namespace "gc-5258" +Apr 29 18:14:49.304: INFO: Deleting pod "simpletest.rc-zx2x9" in namespace "gc-5258" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:14:49.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5258" for this suite. + +• [SLOW TEST:40.474 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":3,"skipped":6,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:14:49.331: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename discovery +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:14:50.397: INFO: Checking APIGroup: apiregistration.k8s.io +Apr 29 18:14:50.399: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Apr 29 18:14:50.399: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Apr 29 18:14:50.399: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Apr 29 18:14:50.399: INFO: Checking APIGroup: apps +Apr 29 18:14:50.426: INFO: PreferredVersion.GroupVersion: apps/v1 +Apr 29 18:14:50.426: INFO: Versions found [{apps/v1 v1}] +Apr 29 18:14:50.426: INFO: apps/v1 matches apps/v1 +Apr 29 18:14:50.426: INFO: Checking APIGroup: events.k8s.io +Apr 29 18:14:50.431: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Apr 29 18:14:50.431: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Apr 29 18:14:50.431: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Apr 29 18:14:50.431: INFO: Checking APIGroup: authentication.k8s.io +Apr 29 18:14:50.433: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Apr 29 
18:14:50.433: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Apr 29 18:14:50.433: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Apr 29 18:14:50.433: INFO: Checking APIGroup: authorization.k8s.io +Apr 29 18:14:50.436: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Apr 29 18:14:50.436: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Apr 29 18:14:50.436: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Apr 29 18:14:50.436: INFO: Checking APIGroup: autoscaling +Apr 29 18:14:50.438: INFO: PreferredVersion.GroupVersion: autoscaling/v1 +Apr 29 18:14:50.438: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Apr 29 18:14:50.438: INFO: autoscaling/v1 matches autoscaling/v1 +Apr 29 18:14:50.438: INFO: Checking APIGroup: batch +Apr 29 18:14:50.441: INFO: PreferredVersion.GroupVersion: batch/v1 +Apr 29 18:14:50.441: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Apr 29 18:14:50.441: INFO: batch/v1 matches batch/v1 +Apr 29 18:14:50.441: INFO: Checking APIGroup: certificates.k8s.io +Apr 29 18:14:50.443: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Apr 29 18:14:50.443: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Apr 29 18:14:50.443: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Apr 29 18:14:50.443: INFO: Checking APIGroup: networking.k8s.io +Apr 29 18:14:50.447: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Apr 29 18:14:50.447: INFO: Versions found [{networking.k8s.io/v1 v1}] +Apr 29 18:14:50.447: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Apr 29 18:14:50.447: INFO: Checking APIGroup: policy +Apr 29 18:14:50.450: INFO: PreferredVersion.GroupVersion: policy/v1 +Apr 29 18:14:50.450: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Apr 29 18:14:50.450: INFO: policy/v1 matches policy/v1 +Apr 29 18:14:50.450: INFO: Checking APIGroup: rbac.authorization.k8s.io +Apr 29 18:14:50.452: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Apr 29 18:14:50.452: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Apr 29 18:14:50.452: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Apr 29 18:14:50.452: INFO: Checking APIGroup: storage.k8s.io +Apr 29 18:14:50.454: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Apr 29 18:14:50.454: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Apr 29 18:14:50.454: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Apr 29 18:14:50.454: INFO: Checking APIGroup: admissionregistration.k8s.io +Apr 29 18:14:50.456: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Apr 29 18:14:50.456: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Apr 29 18:14:50.456: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Apr 29 18:14:50.456: INFO: Checking APIGroup: apiextensions.k8s.io +Apr 29 18:14:50.458: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Apr 29 18:14:50.458: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Apr 29 18:14:50.458: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Apr 29 18:14:50.458: INFO: Checking APIGroup: scheduling.k8s.io +Apr 29 18:14:50.461: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Apr 29 18:14:50.461: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Apr 29 18:14:50.461: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Apr 29 18:14:50.461: INFO: Checking APIGroup: 
coordination.k8s.io +Apr 29 18:14:50.463: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Apr 29 18:14:50.464: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Apr 29 18:14:50.464: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Apr 29 18:14:50.464: INFO: Checking APIGroup: node.k8s.io +Apr 29 18:14:50.466: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Apr 29 18:14:50.466: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Apr 29 18:14:50.466: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Apr 29 18:14:50.466: INFO: Checking APIGroup: discovery.k8s.io +Apr 29 18:14:50.468: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Apr 29 18:14:50.468: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Apr 29 18:14:50.468: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Apr 29 18:14:50.468: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Apr 29 18:14:50.470: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 +Apr 29 18:14:50.470: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Apr 29 18:14:50.470: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 +Apr 29 18:14:50.470: INFO: Checking APIGroup: acme.cert-manager.io +Apr 29 18:14:50.472: INFO: PreferredVersion.GroupVersion: acme.cert-manager.io/v1 +Apr 29 18:14:50.472: INFO: Versions found [{acme.cert-manager.io/v1 v1} {acme.cert-manager.io/v1beta1 v1beta1} {acme.cert-manager.io/v1alpha3 v1alpha3} {acme.cert-manager.io/v1alpha2 v1alpha2}] +Apr 29 18:14:50.472: INFO: acme.cert-manager.io/v1 matches acme.cert-manager.io/v1 +Apr 29 18:14:50.473: INFO: Checking APIGroup: cert-manager.io +Apr 29 18:14:50.476: INFO: PreferredVersion.GroupVersion: cert-manager.io/v1 +Apr 29 18:14:50.476: INFO: Versions found [{cert-manager.io/v1 v1} {cert-manager.io/v1beta1 v1beta1} {cert-manager.io/v1alpha3 v1alpha3} {cert-manager.io/v1alpha2 v1alpha2}] +Apr 29 18:14:50.476: INFO: cert-manager.io/v1 matches cert-manager.io/v1 +Apr 29 18:14:50.476: INFO: Checking APIGroup: ako.vmware.com +Apr 29 18:14:50.480: INFO: PreferredVersion.GroupVersion: ako.vmware.com/v1alpha1 +Apr 29 18:14:50.480: INFO: Versions found [{ako.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.480: INFO: ako.vmware.com/v1alpha1 matches ako.vmware.com/v1alpha1 +Apr 29 18:14:50.480: INFO: Checking APIGroup: cli.tanzu.vmware.com +Apr 29 18:14:50.482: INFO: PreferredVersion.GroupVersion: cli.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.482: INFO: Versions found [{cli.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.482: INFO: cli.tanzu.vmware.com/v1alpha1 matches cli.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.482: INFO: Checking APIGroup: cns.vmware.com +Apr 29 18:14:50.487: INFO: PreferredVersion.GroupVersion: cns.vmware.com/v1alpha1 +Apr 29 18:14:50.487: INFO: Versions found [{cns.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.487: INFO: cns.vmware.com/v1alpha1 matches cns.vmware.com/v1alpha1 +Apr 29 18:14:50.487: INFO: Checking APIGroup: config.tanzu.vmware.com +Apr 29 18:14:50.490: INFO: PreferredVersion.GroupVersion: config.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.490: INFO: Versions found [{config.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.490: INFO: config.tanzu.vmware.com/v1alpha1 matches config.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.490: INFO: Checking APIGroup: crd.antrea.io +Apr 29 18:14:50.491: INFO: PreferredVersion.GroupVersion: crd.antrea.io/v1beta1 +Apr 29 18:14:50.491: INFO: 
Versions found [{crd.antrea.io/v1beta1 v1beta1} {crd.antrea.io/v1alpha3 v1alpha3} {crd.antrea.io/v1alpha2 v1alpha2} {crd.antrea.io/v1alpha1 v1alpha1}] +Apr 29 18:14:50.491: INFO: crd.antrea.io/v1beta1 matches crd.antrea.io/v1beta1 +Apr 29 18:14:50.491: INFO: Checking APIGroup: crd.antrea.tanzu.vmware.com +Apr 29 18:14:50.493: INFO: PreferredVersion.GroupVersion: crd.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.493: INFO: Versions found [{crd.antrea.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.493: INFO: crd.antrea.tanzu.vmware.com/v1alpha1 matches crd.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.493: INFO: Checking APIGroup: internal.packaging.carvel.dev +Apr 29 18:14:50.495: INFO: PreferredVersion.GroupVersion: internal.packaging.carvel.dev/v1alpha1 +Apr 29 18:14:50.495: INFO: Versions found [{internal.packaging.carvel.dev/v1alpha1 v1alpha1}] +Apr 29 18:14:50.495: INFO: internal.packaging.carvel.dev/v1alpha1 matches internal.packaging.carvel.dev/v1alpha1 +Apr 29 18:14:50.495: INFO: Checking APIGroup: kappctrl.k14s.io +Apr 29 18:14:50.496: INFO: PreferredVersion.GroupVersion: kappctrl.k14s.io/v1alpha1 +Apr 29 18:14:50.497: INFO: Versions found [{kappctrl.k14s.io/v1alpha1 v1alpha1}] +Apr 29 18:14:50.497: INFO: kappctrl.k14s.io/v1alpha1 matches kappctrl.k14s.io/v1alpha1 +Apr 29 18:14:50.497: INFO: Checking APIGroup: networking.tkg.tanzu.vmware.com +Apr 29 18:14:50.499: INFO: PreferredVersion.GroupVersion: networking.tkg.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.499: INFO: Versions found [{networking.tkg.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.499: INFO: networking.tkg.tanzu.vmware.com/v1alpha1 matches networking.tkg.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.499: INFO: Checking APIGroup: networking.x-k8s.io +Apr 29 18:14:50.501: INFO: PreferredVersion.GroupVersion: networking.x-k8s.io/v1alpha1 +Apr 29 18:14:50.501: INFO: Versions found [{networking.x-k8s.io/v1alpha1 v1alpha1}] +Apr 29 18:14:50.501: INFO: networking.x-k8s.io/v1alpha1 matches networking.x-k8s.io/v1alpha1 +Apr 29 18:14:50.501: INFO: Checking APIGroup: ops.antrea.tanzu.vmware.com +Apr 29 18:14:50.502: INFO: PreferredVersion.GroupVersion: ops.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.502: INFO: Versions found [{ops.antrea.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.503: INFO: ops.antrea.tanzu.vmware.com/v1alpha1 matches ops.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.503: INFO: Checking APIGroup: packaging.carvel.dev +Apr 29 18:14:50.505: INFO: PreferredVersion.GroupVersion: packaging.carvel.dev/v1alpha1 +Apr 29 18:14:50.505: INFO: Versions found [{packaging.carvel.dev/v1alpha1 v1alpha1}] +Apr 29 18:14:50.505: INFO: packaging.carvel.dev/v1alpha1 matches packaging.carvel.dev/v1alpha1 +Apr 29 18:14:50.505: INFO: Checking APIGroup: run.tanzu.vmware.com +Apr 29 18:14:50.506: INFO: PreferredVersion.GroupVersion: run.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.506: INFO: Versions found [{run.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.506: INFO: run.tanzu.vmware.com/v1alpha1 matches run.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.506: INFO: Checking APIGroup: secretgen.carvel.dev +Apr 29 18:14:50.508: INFO: PreferredVersion.GroupVersion: secretgen.carvel.dev/v1alpha1 +Apr 29 18:14:50.508: INFO: Versions found [{secretgen.carvel.dev/v1alpha1 v1alpha1}] +Apr 29 18:14:50.508: INFO: secretgen.carvel.dev/v1alpha1 matches secretgen.carvel.dev/v1alpha1 +Apr 29 18:14:50.508: INFO: Checking APIGroup: secretgen.k14s.io +Apr 29 18:14:50.510: INFO: PreferredVersion.GroupVersion: 
secretgen.k14s.io/v1alpha1 +Apr 29 18:14:50.510: INFO: Versions found [{secretgen.k14s.io/v1alpha1 v1alpha1}] +Apr 29 18:14:50.510: INFO: secretgen.k14s.io/v1alpha1 matches secretgen.k14s.io/v1alpha1 +Apr 29 18:14:50.510: INFO: Checking APIGroup: security.antrea.tanzu.vmware.com +Apr 29 18:14:50.512: INFO: PreferredVersion.GroupVersion: security.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.512: INFO: Versions found [{security.antrea.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.512: INFO: security.antrea.tanzu.vmware.com/v1alpha1 matches security.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.512: INFO: Checking APIGroup: core.antrea.tanzu.vmware.com +Apr 29 18:14:50.514: INFO: PreferredVersion.GroupVersion: core.antrea.tanzu.vmware.com/v1alpha2 +Apr 29 18:14:50.514: INFO: Versions found [{core.antrea.tanzu.vmware.com/v1alpha2 v1alpha2}] +Apr 29 18:14:50.514: INFO: core.antrea.tanzu.vmware.com/v1alpha2 matches core.antrea.tanzu.vmware.com/v1alpha2 +Apr 29 18:14:50.514: INFO: Checking APIGroup: addons.cluster.x-k8s.io +Apr 29 18:14:50.515: INFO: PreferredVersion.GroupVersion: addons.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.515: INFO: Versions found [{addons.cluster.x-k8s.io/v1beta1 v1beta1} {addons.cluster.x-k8s.io/v1alpha4 v1alpha4} {addons.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Apr 29 18:14:50.515: INFO: addons.cluster.x-k8s.io/v1beta1 matches addons.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.515: INFO: Checking APIGroup: bootstrap.cluster.x-k8s.io +Apr 29 18:14:50.518: INFO: PreferredVersion.GroupVersion: bootstrap.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.518: INFO: Versions found [{bootstrap.cluster.x-k8s.io/v1beta1 v1beta1} {bootstrap.cluster.x-k8s.io/v1alpha4 v1alpha4} {bootstrap.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Apr 29 18:14:50.518: INFO: bootstrap.cluster.x-k8s.io/v1beta1 matches bootstrap.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.518: INFO: Checking APIGroup: cluster.x-k8s.io +Apr 29 18:14:50.520: INFO: PreferredVersion.GroupVersion: cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.520: INFO: Versions found [{cluster.x-k8s.io/v1beta1 v1beta1} {cluster.x-k8s.io/v1alpha4 v1alpha4} {cluster.x-k8s.io/v1alpha3 v1alpha3}] +Apr 29 18:14:50.520: INFO: cluster.x-k8s.io/v1beta1 matches cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.520: INFO: Checking APIGroup: clusterctl.cluster.x-k8s.io +Apr 29 18:14:50.522: INFO: PreferredVersion.GroupVersion: clusterctl.cluster.x-k8s.io/v1alpha3 +Apr 29 18:14:50.522: INFO: Versions found [{clusterctl.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Apr 29 18:14:50.522: INFO: clusterctl.cluster.x-k8s.io/v1alpha3 matches clusterctl.cluster.x-k8s.io/v1alpha3 +Apr 29 18:14:50.522: INFO: Checking APIGroup: controlplane.cluster.x-k8s.io +Apr 29 18:14:50.524: INFO: PreferredVersion.GroupVersion: controlplane.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.524: INFO: Versions found [{controlplane.cluster.x-k8s.io/v1beta1 v1beta1} {controlplane.cluster.x-k8s.io/v1alpha4 v1alpha4} {controlplane.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Apr 29 18:14:50.524: INFO: controlplane.cluster.x-k8s.io/v1beta1 matches controlplane.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.524: INFO: Checking APIGroup: infrastructure.cluster.x-k8s.io +Apr 29 18:14:50.526: INFO: PreferredVersion.GroupVersion: infrastructure.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.526: INFO: Versions found [{infrastructure.cluster.x-k8s.io/v1beta1 v1beta1} {infrastructure.cluster.x-k8s.io/v1alpha4 v1alpha4} {infrastructure.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Apr 29 18:14:50.526: INFO: 
infrastructure.cluster.x-k8s.io/v1beta1 matches infrastructure.cluster.x-k8s.io/v1beta1 +Apr 29 18:14:50.526: INFO: Checking APIGroup: clusterinformation.antrea.tanzu.vmware.com +Apr 29 18:14:50.530: INFO: PreferredVersion.GroupVersion: clusterinformation.antrea.tanzu.vmware.com/v1beta1 +Apr 29 18:14:50.530: INFO: Versions found [{clusterinformation.antrea.tanzu.vmware.com/v1beta1 v1beta1}] +Apr 29 18:14:50.530: INFO: clusterinformation.antrea.tanzu.vmware.com/v1beta1 matches clusterinformation.antrea.tanzu.vmware.com/v1beta1 +Apr 29 18:14:50.530: INFO: Checking APIGroup: data.packaging.carvel.dev +Apr 29 18:14:50.532: INFO: PreferredVersion.GroupVersion: data.packaging.carvel.dev/v1alpha1 +Apr 29 18:14:50.532: INFO: Versions found [{data.packaging.carvel.dev/v1alpha1 v1alpha1}] +Apr 29 18:14:50.532: INFO: data.packaging.carvel.dev/v1alpha1 matches data.packaging.carvel.dev/v1alpha1 +Apr 29 18:14:50.532: INFO: Checking APIGroup: stats.antrea.io +Apr 29 18:14:50.535: INFO: PreferredVersion.GroupVersion: stats.antrea.io/v1alpha1 +Apr 29 18:14:50.535: INFO: Versions found [{stats.antrea.io/v1alpha1 v1alpha1}] +Apr 29 18:14:50.535: INFO: stats.antrea.io/v1alpha1 matches stats.antrea.io/v1alpha1 +Apr 29 18:14:50.535: INFO: Checking APIGroup: stats.antrea.tanzu.vmware.com +Apr 29 18:14:50.537: INFO: PreferredVersion.GroupVersion: stats.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.537: INFO: Versions found [{stats.antrea.tanzu.vmware.com/v1alpha1 v1alpha1}] +Apr 29 18:14:50.537: INFO: stats.antrea.tanzu.vmware.com/v1alpha1 matches stats.antrea.tanzu.vmware.com/v1alpha1 +Apr 29 18:14:50.537: INFO: Checking APIGroup: controlplane.antrea.tanzu.vmware.com +Apr 29 18:14:50.538: INFO: PreferredVersion.GroupVersion: controlplane.antrea.tanzu.vmware.com/v1beta2 +Apr 29 18:14:50.538: INFO: Versions found [{controlplane.antrea.tanzu.vmware.com/v1beta2 v1beta2} {controlplane.antrea.tanzu.vmware.com/v1beta1 v1beta1}] +Apr 29 18:14:50.538: INFO: controlplane.antrea.tanzu.vmware.com/v1beta2 matches controlplane.antrea.tanzu.vmware.com/v1beta2 +Apr 29 18:14:50.538: INFO: Checking APIGroup: metrics.k8s.io +Apr 29 18:14:50.542: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Apr 29 18:14:50.542: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Apr 29 18:14:50.542: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +Apr 29 18:14:50.542: INFO: Checking APIGroup: system.antrea.io +Apr 29 18:14:50.543: INFO: PreferredVersion.GroupVersion: system.antrea.io/v1beta1 +Apr 29 18:14:50.543: INFO: Versions found [{system.antrea.io/v1beta1 v1beta1}] +Apr 29 18:14:50.543: INFO: system.antrea.io/v1beta1 matches system.antrea.io/v1beta1 +Apr 29 18:14:50.543: INFO: Checking APIGroup: system.antrea.tanzu.vmware.com +Apr 29 18:14:50.546: INFO: PreferredVersion.GroupVersion: system.antrea.tanzu.vmware.com/v1beta1 +Apr 29 18:14:50.546: INFO: Versions found [{system.antrea.tanzu.vmware.com/v1beta1 v1beta1}] +Apr 29 18:14:50.546: INFO: system.antrea.tanzu.vmware.com/v1beta1 matches system.antrea.tanzu.vmware.com/v1beta1 +Apr 29 18:14:50.546: INFO: Checking APIGroup: controlplane.antrea.io +Apr 29 18:14:50.585: INFO: PreferredVersion.GroupVersion: controlplane.antrea.io/v1beta2 +Apr 29 18:14:50.585: INFO: Versions found [{controlplane.antrea.io/v1beta2 v1beta2}] +Apr 29 18:14:50.585: INFO: controlplane.antrea.io/v1beta2 matches controlplane.antrea.io/v1beta2 +[AfterEach] [sig-api-machinery] Discovery + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:14:50.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-1535" for this suite. +•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":4,"skipped":46,"failed":0} +SSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:14:50.688: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-fe135d7e-1a90-4bd3-91b8-239e870c1f92 +STEP: Creating secret with name s-test-opt-upd-8e6874c8-21a5-49e9-ba9c-727477ef26ce +STEP: Creating the pod +Apr 29 18:14:50.762: INFO: The status of Pod pod-secrets-5bf9a405-b77a-4d1c-a8e6-331b502e23c4 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:14:52.768: INFO: The status of Pod pod-secrets-5bf9a405-b77a-4d1c-a8e6-331b502e23c4 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:14:54.768: INFO: The status of Pod pod-secrets-5bf9a405-b77a-4d1c-a8e6-331b502e23c4 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-fe135d7e-1a90-4bd3-91b8-239e870c1f92 +STEP: Updating secret s-test-opt-upd-8e6874c8-21a5-49e9-ba9c-727477ef26ce +STEP: Creating secret with name s-test-opt-create-2a5e7bed-f643-457b-a9b6-f59c763b87a4 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:16.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1448" for this suite. 
+ +• [SLOW TEST:85.659 seconds] +[sig-storage] Secrets +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":5,"skipped":51,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:16.348: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:16:16.386: INFO: Creating pod... +Apr 29 18:16:16.424: INFO: Pod Quantity: 1 Status: Pending +Apr 29 18:16:17.431: INFO: Pod Quantity: 1 Status: Pending +Apr 29 18:16:18.430: INFO: Pod Status: Running +Apr 29 18:16:18.430: INFO: Creating service... +Apr 29 18:16:18.442: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/DELETE +Apr 29 18:16:18.453: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Apr 29 18:16:18.453: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/GET +Apr 29 18:16:18.459: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Apr 29 18:16:18.459: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/HEAD +Apr 29 18:16:18.469: INFO: http.Client request:HEAD | StatusCode:200 +Apr 29 18:16:18.469: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/OPTIONS +Apr 29 18:16:18.485: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Apr 29 18:16:18.485: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/PATCH +Apr 29 18:16:18.495: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Apr 29 18:16:18.495: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/POST +Apr 29 18:16:18.501: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Apr 29 18:16:18.501: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/pods/agnhost/proxy/some/path/with/PUT +Apr 29 18:16:18.506: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Apr 29 18:16:18.506: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/DELETE +Apr 29 
18:16:18.517: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Apr 29 18:16:18.517: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/GET +Apr 29 18:16:18.522: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Apr 29 18:16:18.522: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/HEAD +Apr 29 18:16:18.528: INFO: http.Client request:HEAD | StatusCode:200 +Apr 29 18:16:18.528: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/OPTIONS +Apr 29 18:16:18.534: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Apr 29 18:16:18.534: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/PATCH +Apr 29 18:16:18.539: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Apr 29 18:16:18.539: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/POST +Apr 29 18:16:18.545: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Apr 29 18:16:18.545: INFO: Starting http.Client for https://100.64.0.1:443/api/v1/namespaces/proxy-2861/services/test-service/proxy/some/path/with/PUT +Apr 29 18:16:18.552: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:18.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-2861" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":6,"skipped":59,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:18.564: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Apr 29 18:16:18.620: INFO: Waiting up to 5m0s for pod "pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6" in namespace "emptydir-7572" to be "Succeeded or Failed" +Apr 29 18:16:18.625: INFO: Pod "pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.119687ms +Apr 29 18:16:20.634: INFO: Pod "pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014177402s +Apr 29 18:16:22.644: INFO: Pod "pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023647376s +STEP: Saw pod success +Apr 29 18:16:22.644: INFO: Pod "pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6" satisfied condition "Succeeded or Failed" +Apr 29 18:16:22.648: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6 container test-container: +STEP: delete the pod +Apr 29 18:16:22.690: INFO: Waiting for pod pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6 to disappear +Apr 29 18:16:22.694: INFO: Pod pod-05a2c4ae-4547-402c-a7bf-cd87e8c041d6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:22.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7572" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":7,"skipped":116,"failed":0} +SSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:22.707: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Apr 29 18:16:22.810: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Apr 29 18:16:22.817: INFO: starting watch +STEP: patching +STEP: updating +Apr 29 18:16:22.840: INFO: waiting for watch events with expected annotations +Apr 29 18:16:22.840: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:22.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-3366" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":8,"skipped":120,"failed":0} + +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:22.877: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-6173 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-6173 +I0429 18:16:22.973690 25 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6173, replica count: 2 +I0429 18:16:26.025102 25 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 18:16:26.025: INFO: Creating new exec pod +Apr 29 18:16:29.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Apr 29 18:16:29.815: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Apr 29 18:16:29.815: INFO: stdout: "externalname-service-j7ljj" +Apr 29 18:16:29.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.174.219 80' +Apr 29 18:16:30.028: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.174.219 80\nConnection to 100.70.174.219 80 port [tcp/http] succeeded!\n" +Apr 29 18:16:30.028: INFO: stdout: "" +Apr 29 18:16:31.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.174.219 80' +Apr 29 18:16:31.293: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.174.219 80\nConnection to 100.70.174.219 80 port [tcp/http] succeeded!\n" +Apr 29 18:16:31.293: INFO: stdout: "" +Apr 29 18:16:32.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.70.174.219 80' +Apr 29 18:16:32.258: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.70.174.219 80\nConnection to 100.70.174.219 80 port [tcp/http] succeeded!\n" +Apr 29 18:16:32.258: INFO: stdout: "externalname-service-g86dm" +Apr 29 18:16:32.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 
--namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 30735' +Apr 29 18:16:32.474: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.111.35 30735\nConnection to 10.180.111.35 30735 port [tcp/*] succeeded!\n" +Apr 29 18:16:32.474: INFO: stdout: "" +Apr 29 18:16:33.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 30735' +Apr 29 18:16:33.698: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.111.35 30735\nConnection to 10.180.111.35 30735 port [tcp/*] succeeded!\n" +Apr 29 18:16:33.698: INFO: stdout: "externalname-service-g86dm" +Apr 29 18:16:33.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6173 exec execpodbwspn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.99.66 30735' +Apr 29 18:16:33.926: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.99.66 30735\nConnection to 10.180.99.66 30735 port [tcp/*] succeeded!\n" +Apr 29 18:16:33.926: INFO: stdout: "externalname-service-j7ljj" +Apr 29 18:16:33.926: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:33.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6173" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:11.088 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":9,"skipped":120,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:33.966: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +STEP: reading a file in the container +Apr 29 18:16:36.550: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1143 pod-service-account-362f7923-f394-4112-98c6-9b031899ee71 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Apr 29 18:16:36.737: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1143 
pod-service-account-362f7923-f394-4112-98c6-9b031899ee71 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Apr 29 18:16:36.942: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1143 pod-service-account-362f7923-f394-4112-98c6-9b031899ee71 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:37.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-1143" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":10,"skipped":146,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:37.153: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:37.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-4965" for this suite. 
+•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":11,"skipped":160,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:37.256: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:37.297: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename disruption-2 +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-8696 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:41.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-7306" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:41.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-8696" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":12,"skipped":168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:41.439: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:41.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-608" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":13,"skipped":203,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:41.502: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Starting the proxy +Apr 29 18:16:41.535: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-8687 proxy --unix-socket=/tmp/kubectl-proxy-unix078463855/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:41.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8687" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":14,"skipped":209,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:41.600: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-4112/configmap-test-2396b4f8-6f4f-4ca4-88b1-a0b8fa6b4f85 +STEP: Creating a pod to test consume configMaps +Apr 29 18:16:41.649: INFO: Waiting up to 5m0s for pod "pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb" in namespace "configmap-4112" to be "Succeeded or Failed" +Apr 29 18:16:41.654: INFO: Pod "pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.831532ms +Apr 29 18:16:43.660: INFO: Pod "pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb": Phase="Running", Reason="", readiness=true. Elapsed: 2.010864354s +Apr 29 18:16:45.666: INFO: Pod "pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016234724s +STEP: Saw pod success +Apr 29 18:16:45.666: INFO: Pod "pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb" satisfied condition "Succeeded or Failed" +Apr 29 18:16:45.671: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb container env-test: +STEP: delete the pod +Apr 29 18:16:45.689: INFO: Waiting for pod pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb to disappear +Apr 29 18:16:45.692: INFO: Pod pod-configmaps-55cbb4ea-eb2d-449a-837c-c295420484eb no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:16:45.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4112" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":15,"skipped":241,"failed":0} +SSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:16:45.703: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Apr 29 18:16:45.735: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Apr 29 18:16:45.744: INFO: Waiting for terminating namespaces to be deleted... 
+Apr 29 18:16:45.751: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-control-plane-4czbf before test +Apr 29 18:16:45.765: INFO: ako-0 from avi-system started at 2022-04-29 17:51:41 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container ako-tkg-system-tkg-mgmt-vc ready: true, restart count 0 +Apr 29 18:16:45.765: INFO: capi-kubeadm-bootstrap-controller-manager-7ffb6dc8fc-8l5kl from capi-kubeadm-bootstrap-system started at 2022-04-29 01:35:11 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container manager ready: true, restart count 12 +Apr 29 18:16:45.765: INFO: capi-kubeadm-control-plane-controller-manager-667999fdb8-twv4s from capi-kubeadm-control-plane-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container manager ready: true, restart count 2 +Apr 29 18:16:45.765: INFO: capi-controller-manager-65c5769c4c-555gx from capi-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container manager ready: true, restart count 15 +Apr 29 18:16:45.765: INFO: capv-controller-manager-75bdbfb7dc-888vj from capv-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container manager ready: true, restart count 15 +Apr 29 18:16:45.765: INFO: cert-manager-cainjector-cc485fcdc-4qq4t from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container cert-manager ready: true, restart count 7 +Apr 29 18:16:45.765: INFO: cert-manager-d6b468546-pctjx from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.765: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 18:16:45.765: INFO: cert-manager-webhook-dd697458d-c6xrg from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: antrea-agent-k79rx from kube-system started at 2022-04-28 17:17:44 +0000 UTC (2 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: antrea-controller-f84fc8fd6-clc5q from kube-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container antrea-controller ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: coredns-67c8559bb6-7k2mz from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container coredns ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: coredns-67c8559bb6-bgthp from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container coredns ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: etcd-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container etcd ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: kube-apiserver-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container kube-apiserver ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: 
kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container kube-controller-manager ready: true, restart count 18 +Apr 29 18:16:45.766: INFO: kube-proxy-2fvxm from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: kube-scheduler-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-29 16:30:09 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container kube-scheduler ready: true, restart count 18 +Apr 29 18:16:45.766: INFO: metrics-server-58bbfb986f-7q897 from kube-system started at 2022-04-29 14:28:58 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container metrics-server ready: true, restart count 1 +Apr 29 18:16:45.766: INFO: vsphere-cloud-controller-manager-9gc8w from kube-system started at 2022-04-28 17:16:39 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 19 +Apr 29 18:16:45.766: INFO: vsphere-csi-controller-7d96796c4d-p276x from kube-system started at 2022-04-28 17:16:08 +0000 UTC (5 container statuses recorded) +Apr 29 18:16:45.766: INFO: Container csi-attacher ready: true, restart count 21 +Apr 29 18:16:45.766: INFO: Container csi-provisioner ready: true, restart count 22 +Apr 29 18:16:45.766: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 18:16:45.767: INFO: Container vsphere-csi-controller ready: true, restart count 2 +Apr 29 18:16:45.767: INFO: Container vsphere-syncer ready: true, restart count 18 +Apr 29 18:16:45.767: INFO: vsphere-csi-node-ld676 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 18:16:45.767: INFO: Container node-driver-registrar ready: true, restart count 2 +Apr 29 18:16:45.767: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 18:16:45.767: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2kxj9 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 18:16:45.767: INFO: Container systemd-logs ready: false, restart count 0 +Apr 29 18:16:45.767: INFO: secretgen-controller-6dd9c95967-hfpnj from tanzu-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container secretgen-controller ready: true, restart count 0 +Apr 29 18:16:45.767: INFO: ako-operator-controller-manager-79cb9ccfc8-lwlw6 from tkg-system-networking started at 2022-04-29 17:51:37 +0000 UTC (2 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Apr 29 18:16:45.767: INFO: Container manager ready: true, restart count 0 +Apr 29 18:16:45.767: INFO: kapp-controller-5b7d886dcc-rg8d8 from tkg-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container kapp-controller ready: true, restart count 1 +Apr 29 18:16:45.767: INFO: tanzu-addons-controller-manager-667d5c846f-f78n7 from tkg-system started at 2022-04-28 17:13:30 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container tanzu-addons-controller ready: true, 
restart count 1 +Apr 29 18:16:45.767: INFO: tanzu-capabilities-controller-manager-7864dcb4b7-9jhgh from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container manager ready: true, restart count 0 +Apr 29 18:16:45.767: INFO: tanzu-featuregates-controller-manager-fb8cf8ffc-qptgc from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container manager ready: true, restart count 0 +Apr 29 18:16:45.767: INFO: tkr-controller-manager-7c99874659-rqlgx from tkr-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.767: INFO: Container manager ready: true, restart count 1 +Apr 29 18:16:45.767: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-md-0-59d8b7c778-msxpc before test +Apr 29 18:16:45.790: INFO: antrea-agent-jmd5f from kube-system started at 2022-04-28 17:17:22 +0000 UTC (2 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 18:16:45.790: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 18:16:45.790: INFO: kube-proxy-gqrhv from kube-system started at 2022-04-28 17:12:43 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 18:16:45.790: INFO: vsphere-csi-node-fxcc9 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 18:16:45.790: INFO: Container node-driver-registrar ready: true, restart count 4 +Apr 29 18:16:45.790: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 18:16:45.790: INFO: sonobuoy from sonobuoy started at 2022-04-29 18:13:55 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container kube-sonobuoy ready: true, restart count 0 +Apr 29 18:16:45.790: INFO: sonobuoy-e2e-job-d928f42f9304448b from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container e2e ready: true, restart count 0 +Apr 29 18:16:45.790: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 18:16:45.790: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2lph2 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 18:16:45.790: INFO: Container systemd-logs ready: false, restart count 0 +Apr 29 18:16:45.790: INFO: tkg-telemetry-27520920--1-brvpx from tkg-system-telemetry started at 2022-04-29 18:00:00 +0000 UTC (1 container statuses recorded) +Apr 29 18:16:45.790: INFO: Container tkg-telemetry ready: false, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. 
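The next steps create two pods contending for hostPort 54322 on the same node. A minimal sketch of that contention, assuming a hypothetical node label to pin both pods to one node: pod4 binds the port on 0.0.0.0, so the scheduler must refuse pod5's claim on any specific host IP of that node.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostPortPod builds a pod pinned to a labeled node that claims hostPort
// 54322 on the given hostIP.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"e2e-demo": "true"}, // hypothetical node label
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP,
				}},
			}},
		},
	}
}

func main() {
	// Kubeconfig path from the run above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-935385948")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// pod4 schedules: 0.0.0.0 claims the port on every address of the node.
	if _, err := cs.CoreV1().Pods("default").Create(ctx, hostPortPod("pod4", "0.0.0.0"), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// pod5 asks for the same port on one specific node address and stays
	// Pending: the scheduler treats 0.0.0.0 as conflicting with any hostIP.
	if _, err := cs.CoreV1().Pods("default").Create(ctx, hostPortPod("pod5", "10.180.99.66"), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```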
+STEP: verifying the node has the label kubernetes.io/e2e-c2729194-c552-4a6b-b9fe-7367dff8f994 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.180.99.66 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-c2729194-c552-4a6b-b9fe-7367dff8f994 off the node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +STEP: verifying the node doesn't have the label kubernetes.io/e2e-c2729194-c552-4a6b-b9fe-7367dff8f994 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:21:53.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-4349" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:308.232 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":16,"skipped":250,"failed":0} +SSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:21:53.937: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values 
+STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:21:58.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9853" for this suite. +•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":17,"skipped":254,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:21:58.014: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:21:58.086: INFO: Create a RollingUpdate DaemonSet +Apr 29 18:21:58.095: INFO: Check that daemon pods launch on every node of the cluster +Apr 29 18:21:58.108: INFO: Number of nodes with available pods: 0 +Apr 29 18:21:58.108: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:21:59.121: INFO: Number of nodes with available pods: 0 +Apr 29 18:21:59.121: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:22:00.124: INFO: Number of nodes with available pods: 2 +Apr 29 18:22:00.124: INFO: Number of running nodes: 2, number of available pods: 2 +Apr 29 18:22:00.125: INFO: Update the DaemonSet to trigger a rollout +Apr 29 18:22:00.137: INFO: Updating DaemonSet daemon-set +Apr 29 18:22:03.165: INFO: Roll back the DaemonSet before rollout is complete +Apr 29 18:22:03.176: INFO: Updating DaemonSet daemon-set +Apr 29 18:22:03.176: INFO: Make sure DaemonSet rollback is complete +Apr 29 18:22:03.182: INFO: Wrong image for pod: daemon-set-gwk5c. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
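The rollback being verified here amounts to restoring the previous pod template before the broken rollout finishes; pods already running the good image must not be restarted. A client-go sketch of the same update-then-revert sequence (the namespace is hypothetical; the bad image name matches what the log shows):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the run above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-935385948")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ds := cs.AppsV1().DaemonSets("default") // hypothetical namespace

	// Break the rollout: point the template at an image that cannot be pulled.
	d, err := ds.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := d.Spec.Template.Spec.Containers[0].Image
	d.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if d, err = ds.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the rollout completes; pods still running the good
	// image should be left alone ("rollback without unnecessary restarts").
	d.Spec.Template.Spec.Containers[0].Image = good
	if _, err = ds.Update(ctx, d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```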
+Apr 29 18:22:03.182: INFO: Pod daemon-set-gwk5c is not available +Apr 29 18:22:07.199: INFO: Pod daemon-set-4tp92 is not available +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6821, will wait for the garbage collector to delete the pods +Apr 29 18:22:07.284: INFO: Deleting DaemonSet.extensions daemon-set took: 6.817624ms +Apr 29 18:22:07.385: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.2937ms +Apr 29 18:22:09.992: INFO: Number of nodes with available pods: 0 +Apr 29 18:22:09.992: INFO: Number of running nodes: 0, number of available pods: 0 +Apr 29 18:22:09.998: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"714574"},"items":null} + +Apr 29 18:22:10.002: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"714574"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:10.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6821" for this suite. + +• [SLOW TEST:12.019 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":18,"skipped":285,"failed":0} +SS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:10.034: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:22:10.080: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8933cca0-d296-4711-9b06-dec58fac15c9" in namespace "security-context-test-1537" to be "Succeeded or Failed" +Apr 29 18:22:10.086: INFO: Pod "busybox-user-65534-8933cca0-d296-4711-9b06-dec58fac15c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537676ms +Apr 29 18:22:12.092: INFO: Pod "busybox-user-65534-8933cca0-d296-4711-9b06-dec58fac15c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012550812s +Apr 29 18:22:12.093: INFO: Pod "busybox-user-65534-8933cca0-d296-4711-9b06-dec58fac15c9" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:12.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-1537" for this suite. +•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":19,"skipped":287,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:12.107: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-0fb01e5c-558d-429e-9d2c-a832c4ba9973 +STEP: Creating a pod to test consume secrets +Apr 29 18:22:12.176: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b" in namespace "projected-5493" to be "Succeeded or Failed" +Apr 29 18:22:12.180: INFO: Pod "pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212669ms +Apr 29 18:22:14.186: INFO: Pod "pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009550897s +STEP: Saw pod success +Apr 29 18:22:14.186: INFO: Pod "pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b" satisfied condition "Succeeded or Failed" +Apr 29 18:22:14.190: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b container projected-secret-volume-test: +STEP: delete the pod +Apr 29 18:22:14.230: INFO: Waiting for pod pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b to disappear +Apr 29 18:22:14.234: INFO: Pod pod-projected-secrets-c77759ef-58ef-4dbe-be4d-ad3c8227b42b no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:14.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5493" for this suite. 
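The projected-secret mapping exercised above renames the secret key inside the volume and applies an explicit per-item file mode. A sketch of an equivalent pod; the secret name, key, mapped path, and namespace are hypothetical:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the run above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-935385948")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "default"

	// Backing secret to be projected.
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Project the secret key to a mapped path with an explicit 0400 item mode.
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-demo"},
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```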
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":20,"skipped":296,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:14.248: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:14.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-5269" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":21,"skipped":309,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:14.323: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +Apr 29 18:22:15.138: INFO: role binding webhook-auth-reader already exists +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:22:15.158: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:22:18.186: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:31.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3289" for this suite. +STEP: Destroying namespace "webhook-3289-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:17.085 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":22,"skipped":344,"failed":0} +S +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:31.409: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +Apr 29 18:22:31.468: INFO: Creating simple deployment test-deployment-rdt6r +Apr 29 18:22:31.508: INFO: deployment "test-deployment-rdt6r" doesn't have the required revision set +Apr 29 18:22:33.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853351, loc:(*time.Location)(0xa0a1d40)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853351, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853351, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853351, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-deployment-rdt6r-794dd694d8\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Getting /status +Apr 29 18:22:35.536: INFO: Deployment test-deployment-rdt6r has Conditions: [{Available True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rdt6r-794dd694d8" has successfully progressed.}] +STEP: updating Deployment Status +Apr 29 18:22:35.545: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853354, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853354, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853354, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786853351, loc:(*time.Location)(0xa0a1d40)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-rdt6r-794dd694d8\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Apr 29 18:22:35.550: INFO: Observed &Deployment event: ADDED +Apr 29 18:22:35.551: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rdt6r-794dd694d8"} +Apr 29 18:22:35.551: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.551: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rdt6r-794dd694d8"} +Apr 29 18:22:35.551: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Apr 29 18:22:35.551: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.551: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 
2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Apr 29 18:22:35.551: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rdt6r-794dd694d8" is progressing.} +Apr 29 18:22:35.551: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.552: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Apr 29 18:22:35.552: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rdt6r-794dd694d8" has successfully progressed.} +Apr 29 18:22:35.552: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.552: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Apr 29 18:22:35.552: INFO: Observed Deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rdt6r-794dd694d8" has successfully progressed.} +Apr 29 18:22:35.552: INFO: Found Deployment test-deployment-rdt6r in namespace deployment-1920 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Apr 29 18:22:35.552: INFO: Deployment test-deployment-rdt6r has an updated status +STEP: patching the Statefulset Status +Apr 29 18:22:35.552: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Apr 29 18:22:35.558: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Apr 29 18:22:35.561: INFO: Observed &Deployment event: ADDED +Apr 29 18:22:35.561: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-rdt6r-794dd694d8"} +Apr 29 18:22:35.562: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.562: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 
UTC NewReplicaSetCreated Created new replica set "test-deployment-rdt6r-794dd694d8"} +Apr 29 18:22:35.562: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Apr 29 18:22:35.562: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.562: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Apr 29 18:22:35.562: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:31 +0000 UTC 2022-04-29 18:22:31 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-rdt6r-794dd694d8" is progressing.} +Apr 29 18:22:35.563: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.563: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Apr 29 18:22:35.563: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rdt6r-794dd694d8" has successfully progressed.} +Apr 29 18:22:35.563: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.563: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Apr 29 18:22:35.563: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-29 18:22:34 +0000 UTC 2022-04-29 18:22:31 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-rdt6r-794dd694d8" has successfully progressed.} +Apr 29 18:22:35.563: INFO: Observed deployment test-deployment-rdt6r in namespace deployment-1920 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Apr 29 18:22:35.564: INFO: Observed &Deployment event: MODIFIED +Apr 29 18:22:35.564: INFO: Found deployment test-deployment-rdt6r in namespace deployment-1920 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Apr 29 18:22:35.564: INFO: Deployment test-deployment-rdt6r has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 18:22:35.567: INFO: Deployment "test-deployment-rdt6r": +&Deployment{ObjectMeta:{test-deployment-rdt6r 
deployment-1920 a6d55116-e38b-4cd3-abeb-6a5f16227051 714964 1 2022-04-29 18:22:31 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-29 18:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2022-04-29 18:22:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2022-04-29 18:22:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0042a8b68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-rdt6r-794dd694d8",LastUpdateTime:2022-04-29 18:22:35 +0000 UTC,LastTransitionTime:2022-04-29 18:22:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Apr 29 18:22:35.573: INFO: New ReplicaSet "test-deployment-rdt6r-794dd694d8" of Deployment "test-deployment-rdt6r": +&ReplicaSet{ObjectMeta:{test-deployment-rdt6r-794dd694d8 deployment-1920 
8d190616-7148-434d-bab3-08386d4163fc 714945 1 2022-04-29 18:22:31 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-rdt6r a6d55116-e38b-4cd3-abeb-6a5f16227051 0xc0042a8f50 0xc0042a8f51}] [] [{kube-controller-manager Update apps/v1 2022-04-29 18:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6d55116-e38b-4cd3-abeb-6a5f16227051\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:22:34 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 794dd694d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0042a8ff8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:22:35.581: INFO: Pod "test-deployment-rdt6r-794dd694d8-5bgxt" is available: +&Pod{ObjectMeta:{test-deployment-rdt6r-794dd694d8-5bgxt test-deployment-rdt6r-794dd694d8- deployment-1920 956541f2-b0f6-4a0f-b720-cca1fb6e009c 714944 0 2022-04-29 18:22:31 +0000 UTC map[e2e:testing name:httpd pod-template-hash:794dd694d8] map[] [{apps/v1 ReplicaSet test-deployment-rdt6r-794dd694d8 8d190616-7148-434d-bab3-08386d4163fc 0xc0042a93a0 0xc0042a93a1}] [] [{kube-controller-manager Update v1 2022-04-29 18:22:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d190616-7148-434d-bab3-08386d4163fc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:22:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2nhxr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nhxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-
59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:22:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:22:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:22:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:22:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.102,StartTime:2022-04-29 18:22:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:22:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://210bdb88137d0bf537dc460eefe3503aa41a600e6d2128e4569b5b64b4e42cde,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:35.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1920" for this suite. 
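Editor's note: to poke the Deployment status endpoints this test validates by hand, a minimal sketch with an illustrative deployment name (not taken from this run):

```console
$ kubectl create deployment httpd-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
$ kubectl get deployment httpd-demo -o jsonpath='{.status.readyReplicas}'
$ kubectl get --raw /apis/apps/v1/namespaces/default/deployments/httpd-demo/status
```

The raw GET reads the same /status subresource the suite patches and updates above.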
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":23,"skipped":345,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:35.591: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:22:36.389: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:22:39.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a mutating webhook configuration +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:39.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4893" for this suite. +STEP: Destroying namespace "webhook-4893-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":24,"skipped":357,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:39.556: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Apr 29 18:22:39.608: INFO: Waiting up to 5m0s for pod "pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d" in namespace "emptydir-5980" to be "Succeeded or Failed" +Apr 29 18:22:39.611: INFO: Pod "pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.10139ms +Apr 29 18:22:41.616: INFO: Pod "pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008301773s +STEP: Saw pod success +Apr 29 18:22:41.616: INFO: Pod "pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d" satisfied condition "Succeeded or Failed" +Apr 29 18:22:41.620: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d container test-container: +STEP: delete the pod +Apr 29 18:22:41.638: INFO: Waiting for pod pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d to disappear +Apr 29 18:22:41.641: INFO: Pod pod-a1c0c4ab-1264-481a-ad57-c4993e180c4d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:41.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5980" for this suite. 
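Editor's note: a minimal sketch of the volume under test; the pod name is illustrative, and `medium: Memory` is what selects the tmpfs-backed variant:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt && ls -ld /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed, as in the (non-root,0777,tmpfs) case
EOF
$ kubectl logs emptydir-tmpfs-demo
```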
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":25,"skipped":380,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:41.653: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:22:42.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:22:45.277: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:45.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1432" for this suite. +STEP: Destroying namespace "webhook-1432-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":26,"skipped":410,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:45.513: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename ingress +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Apr 29 18:22:45.590: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Apr 29 18:22:45.600: INFO: starting watch +STEP: patching +STEP: updating +Apr 29 18:22:45.617: INFO: waiting for watch events with expected annotations +Apr 29 18:22:45.617: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:45.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-5360" for this suite. +•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":27,"skipped":424,"failed":0} + +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:45.669: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on node default medium +Apr 29 18:22:45.711: INFO: Waiting up to 5m0s for pod "pod-5b515fac-28a1-4477-b932-9fdc0d008367" in namespace "emptydir-5157" to be "Succeeded or Failed" +Apr 29 18:22:45.715: INFO: Pod "pod-5b515fac-28a1-4477-b932-9fdc0d008367": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.151308ms +Apr 29 18:22:47.721: INFO: Pod "pod-5b515fac-28a1-4477-b932-9fdc0d008367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009860463s +STEP: Saw pod success +Apr 29 18:22:47.721: INFO: Pod "pod-5b515fac-28a1-4477-b932-9fdc0d008367" satisfied condition "Succeeded or Failed" +Apr 29 18:22:47.725: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-5b515fac-28a1-4477-b932-9fdc0d008367 container test-container: +STEP: delete the pod +Apr 29 18:22:47.740: INFO: Waiting for pod pod-5b515fac-28a1-4477-b932-9fdc0d008367 to disappear +Apr 29 18:22:47.744: INFO: Pod pod-5b515fac-28a1-4477-b932-9fdc0d008367 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:22:47.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5157" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":28,"skipped":424,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:22:47.755: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:01.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-98" for this suite. 
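Editor's note: concurrent scheduling follows from the CronJob's concurrencyPolicy. A sketch with illustrative names and a job that deliberately outlives the one-minute schedule:

```console
$ kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow   # the default: a new Job may start while the previous one still runs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleep
            image: busybox
            command: ["sleep", "300"]
EOF
$ kubectl get jobs --watch   # after about two minutes, two Jobs should be active at once
```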
+ +• [SLOW TEST:74.078 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":29,"skipped":442,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:01.834: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Apr 29 18:24:01.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 create -f -' +Apr 29 18:24:03.244: INFO: stderr: "" +Apr 29 18:24:03.244: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Apr 29 18:24:03.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 18:24:03.334: INFO: stderr: "" +Apr 29 18:24:03.334: INFO: stdout: "update-demo-nautilus-kdcmz update-demo-nautilus-zv5qb " +Apr 29 18:24:03.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods update-demo-nautilus-kdcmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 18:24:03.412: INFO: stderr: "" +Apr 29 18:24:03.412: INFO: stdout: "" +Apr 29 18:24:03.412: INFO: update-demo-nautilus-kdcmz is created but not running +Apr 29 18:24:08.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 18:24:08.515: INFO: stderr: "" +Apr 29 18:24:08.515: INFO: stdout: "update-demo-nautilus-kdcmz update-demo-nautilus-zv5qb " +Apr 29 18:24:08.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods update-demo-nautilus-kdcmz -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 18:24:08.602: INFO: stderr: "" +Apr 29 18:24:08.602: INFO: stdout: "true" +Apr 29 18:24:08.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods update-demo-nautilus-kdcmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 18:24:08.683: INFO: stderr: "" +Apr 29 18:24:08.683: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 18:24:08.683: INFO: validating pod update-demo-nautilus-kdcmz +Apr 29 18:24:08.707: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 18:24:08.708: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 18:24:08.708: INFO: update-demo-nautilus-kdcmz is verified up and running +Apr 29 18:24:08.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods update-demo-nautilus-zv5qb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 18:24:08.795: INFO: stderr: "" +Apr 29 18:24:08.795: INFO: stdout: "true" +Apr 29 18:24:08.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods update-demo-nautilus-zv5qb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 18:24:08.874: INFO: stderr: "" +Apr 29 18:24:08.874: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 18:24:08.874: INFO: validating pod update-demo-nautilus-zv5qb +Apr 29 18:24:08.887: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 18:24:08.887: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 18:24:08.887: INFO: update-demo-nautilus-zv5qb is verified up and running +STEP: using delete to clean up resources +Apr 29 18:24:08.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 delete --grace-period=0 --force -f -' +Apr 29 18:24:08.972: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Apr 29 18:24:08.972: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Apr 29 18:24:08.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get rc,svc -l name=update-demo --no-headers' +Apr 29 18:24:09.069: INFO: stderr: "No resources found in kubectl-9697 namespace.\n" +Apr 29 18:24:09.069: INFO: stdout: "" +Apr 29 18:24:09.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9697 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Apr 29 18:24:09.163: INFO: stderr: "" +Apr 29 18:24:09.163: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:09.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9697" for this suite. + +• [SLOW TEST:7.346 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":30,"skipped":500,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:09.183: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Apr 29 18:24:09.237: INFO: Waiting up to 5m0s for pod "downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5" in namespace "downward-api-5744" to be "Succeeded or Failed" +Apr 29 18:24:09.242: INFO: Pod "downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287746ms +Apr 29 18:24:11.248: INFO: Pod "downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011025868s +Apr 29 18:24:13.256: INFO: Pod "downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018126412s +STEP: Saw pod success +Apr 29 18:24:13.256: INFO: Pod "downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5" satisfied condition "Succeeded or Failed" +Apr 29 18:24:13.259: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5 container dapi-container: +STEP: delete the pod +Apr 29 18:24:13.278: INFO: Waiting for pod downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5 to disappear +Apr 29 18:24:13.281: INFO: Pod downward-api-b3bc9642-b678-4b91-b429-d366015a1ad5 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:13.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5744" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":31,"skipped":526,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:13.292: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:13.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6725" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":32,"skipped":535,"failed":0} +SSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:13.367: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename limitrange +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Apr 29 18:24:13.416: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Apr 29 18:24:13.423: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Apr 29 18:24:13.423: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Apr 29 18:24:13.433: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Apr 29 18:24:13.433: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Apr 29 18:24:13.451: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Apr 29 18:24:13.451: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Apr 29 18:24:20.493: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:20.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-7820" for this suite. 
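Editor's note: the defaulting verified above can be observed with a bare pod in a namespace carrying a LimitRange; names and values are illustrative:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: defaults-demo
spec:
  limits:
  - type: Container
    defaultRequest:   # filled into resources.requests when a container omits them
      cpu: 100m
      memory: 200Mi
    default:          # filled into resources.limits
      cpu: 500m
      memory: 500Mi
EOF
$ kubectl run limits-demo --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
$ kubectl get pod limits-demo -o jsonpath='{.spec.containers[0].resources}'
```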
+ +• [SLOW TEST:7.143 seconds] +[sig-scheduling] LimitRange +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":33,"skipped":538,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:20.511: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:24:20.910: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:24:23.938: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:23.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6615" for this suite. +STEP: Destroying namespace "webhook-6615-markers" for this suite. 
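Editor's note: "fail closed" means failurePolicy: Fail pointed at an unreachable backend. A sketch with hypothetical names; scope it with a namespaceSelector before trying this on a shared cluster, since as written it blocks matching requests everywhere:

```console
$ kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo
webhooks:
- name: demo.example.com
  failurePolicy: Fail        # reject the request whenever the webhook cannot be reached
  clientConfig:
    service:
      namespace: default
      name: no-such-service  # deliberately unreachable
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
$ kubectl create configmap should-fail   # expected to be rejected by the unreachable webhook
```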
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":34,"skipped":546,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:24.041: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-9e524fcd-9415-4305-9d29-3ff2ed18ddbd +STEP: Creating a pod to test consume configMaps +Apr 29 18:24:24.089: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0" in namespace "projected-2508" to be "Succeeded or Failed" +Apr 29 18:24:24.093: INFO: Pod "pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.694599ms +Apr 29 18:24:26.101: INFO: Pod "pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011031642s +STEP: Saw pod success +Apr 29 18:24:26.101: INFO: Pod "pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0" satisfied condition "Succeeded or Failed" +Apr 29 18:24:26.105: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0 container projected-configmap-volume-test: +STEP: delete the pod +Apr 29 18:24:26.124: INFO: Waiting for pod pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0 to disappear +Apr 29 18:24:26.127: INFO: Pod pod-projected-configmaps-618e05e2-3bef-46df-ad76-341aa3984dc0 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:26.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2508" for this suite. 
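Editor's note: "multiple volumes in the same pod" here means the same ConfigMap projected at two mount points. A sketch with hypothetical names:

```console
$ kubectl create configmap demo-cm --from-literal=key=value
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/key /etc/cm-b/key"]
    volumeMounts:
    - name: vol-a
      mountPath: /etc/cm-a
    - name: vol-b
      mountPath: /etc/cm-b
  volumes:
  - name: vol-a
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: vol-b
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
```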
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":35,"skipped":554,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:26.140: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Apr 29 18:24:28.216: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:28.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9957" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":583,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:28.239: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-ls4f +STEP: Creating a pod to test atomic-volume-subpath +Apr 29 18:24:28.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ls4f" in namespace "subpath-6127" to be "Succeeded or Failed" +Apr 29 18:24:28.295: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.097462ms +Apr 29 18:24:30.301: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 2.01141692s +Apr 29 18:24:32.306: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 4.015643048s +Apr 29 18:24:34.312: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 6.021750359s +Apr 29 18:24:36.317: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 8.027493908s +Apr 29 18:24:38.322: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 10.032503296s +Apr 29 18:24:40.333: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 12.042785564s +Apr 29 18:24:42.340: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 14.05010113s +Apr 29 18:24:44.346: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 16.056040883s +Apr 29 18:24:46.354: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 18.063958045s +Apr 29 18:24:48.362: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 20.071962164s +Apr 29 18:24:50.372: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Running", Reason="", readiness=true. Elapsed: 22.082511575s +Apr 29 18:24:52.380: INFO: Pod "pod-subpath-test-configmap-ls4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.090494191s +STEP: Saw pod success +Apr 29 18:24:52.380: INFO: Pod "pod-subpath-test-configmap-ls4f" satisfied condition "Succeeded or Failed" +Apr 29 18:24:52.386: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-subpath-test-configmap-ls4f container test-container-subpath-configmap-ls4f: +STEP: delete the pod +Apr 29 18:24:52.409: INFO: Waiting for pod pod-subpath-test-configmap-ls4f to disappear +Apr 29 18:24:52.413: INFO: Pod pod-subpath-test-configmap-ls4f no longer exists +STEP: Deleting pod pod-subpath-test-configmap-ls4f +Apr 29 18:24:52.414: INFO: Deleting pod "pod-subpath-test-configmap-ls4f" in namespace "subpath-6127" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:52.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6127" for this suite. + +• [SLOW TEST:24.198 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":37,"skipped":594,"failed":0} +SSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:52.441: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pod templates +Apr 29 18:24:52.507: INFO: created test-podtemplate-1 +Apr 29 18:24:52.516: INFO: created test-podtemplate-2 +Apr 29 18:24:52.522: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Apr 29 18:24:52.526: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Apr 29 18:24:52.543: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:24:52.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-4625" for this suite. 
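Editor's note: the same create-then-bulk-delete flow by hand, with a hypothetical label (the suite calls the DeleteCollection endpoint directly; a label-selector delete reaches the same end state):

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-template
  labels:
    demo: "true"
template:
  spec:
    containers:
    - name: main
      image: busybox
EOF
$ kubectl get podtemplates -l demo=true
$ kubectl delete podtemplates -l demo=true
```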
+•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":38,"skipped":598,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:24:52.561: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:25:06.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6377" for this suite. +STEP: Destroying namespace "nsdeletetest-5317" for this suite. +Apr 29 18:25:06.764: INFO: Namespace nsdeletetest-5317 was already deleted +STEP: Destroying namespace "nsdeletetest-8358" for this suite. 
+ +• [SLOW TEST:14.213 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":39,"skipped":621,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:25:06.774: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9815 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9815;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9815 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9815;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9815.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9815.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9815.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9815.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.187.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.187.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.187.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.187.212_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9815 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9815;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9815 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9815;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9815.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9815.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9815.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9815.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9815.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9815.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.187.71.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.71.187.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.187.71.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.71.187.212_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 18:25:10.895: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.901: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.906: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.912: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.917: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.923: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.928: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.934: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.978: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.983: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.989: INFO: Unable to read jessie_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:10.995: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:11.000: INFO: Unable to read jessie_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:11.006: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:11.012: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:11.018: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:11.056: INFO: Lookups using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9815 wheezy_tcp@dns-test-service.dns-9815 wheezy_udp@dns-test-service.dns-9815.svc wheezy_tcp@dns-test-service.dns-9815.svc wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9815 jessie_tcp@dns-test-service.dns-9815 jessie_udp@dns-test-service.dns-9815.svc jessie_tcp@dns-test-service.dns-9815.svc jessie_udp@_http._tcp.dns-test-service.dns-9815.svc jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc] + +Apr 29 18:25:16.080: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.086: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.092: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.097: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.102: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.108: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.113: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.119: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.182: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.188: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.194: INFO: Unable to read jessie_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.202: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.210: INFO: Unable to read jessie_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.222: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:16.269: INFO: Lookups using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9815 wheezy_tcp@dns-test-service.dns-9815 wheezy_udp@dns-test-service.dns-9815.svc wheezy_tcp@dns-test-service.dns-9815.svc wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9815 jessie_tcp@dns-test-service.dns-9815 jessie_udp@dns-test-service.dns-9815.svc jessie_tcp@dns-test-service.dns-9815.svc jessie_udp@_http._tcp.dns-test-service.dns-9815.svc jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc] + +Apr 29 18:25:21.064: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.070: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.076: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.082: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.089: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.094: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.100: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.104: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.150: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.159: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.166: INFO: Unable to read jessie_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.179: INFO: Unable to read jessie_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.201: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:21.234: INFO: Lookups using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9815 wheezy_tcp@dns-test-service.dns-9815 
wheezy_udp@dns-test-service.dns-9815.svc wheezy_tcp@dns-test-service.dns-9815.svc wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9815 jessie_tcp@dns-test-service.dns-9815 jessie_udp@dns-test-service.dns-9815.svc jessie_tcp@dns-test-service.dns-9815.svc jessie_udp@_http._tcp.dns-test-service.dns-9815.svc jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc] + +Apr 29 18:25:26.065: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.072: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.079: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.085: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.091: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.097: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.103: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.108: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.153: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.160: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.165: INFO: Unable to read jessie_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.171: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.177: INFO: Unable 
to read jessie_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.187: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.192: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:26.249: INFO: Lookups using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9815 wheezy_tcp@dns-test-service.dns-9815 wheezy_udp@dns-test-service.dns-9815.svc wheezy_tcp@dns-test-service.dns-9815.svc wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9815 jessie_tcp@dns-test-service.dns-9815 jessie_udp@dns-test-service.dns-9815.svc jessie_tcp@dns-test-service.dns-9815.svc jessie_udp@_http._tcp.dns-test-service.dns-9815.svc jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc] + +Apr 29 18:25:31.066: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.071: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.076: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.082: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.087: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.093: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.098: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.105: INFO: Unable to 
read wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.147: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.152: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.158: INFO: Unable to read jessie_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.164: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.170: INFO: Unable to read jessie_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.175: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.180: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:31.227: INFO: Lookups using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9815 wheezy_tcp@dns-test-service.dns-9815 wheezy_udp@dns-test-service.dns-9815.svc wheezy_tcp@dns-test-service.dns-9815.svc wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9815 jessie_tcp@dns-test-service.dns-9815 jessie_udp@dns-test-service.dns-9815.svc jessie_tcp@dns-test-service.dns-9815.svc jessie_udp@_http._tcp.dns-test-service.dns-9815.svc jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc] + +Apr 29 18:25:36.066: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.073: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.082: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.089: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.153: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.215: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.223: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.228: INFO: Unable to read jessie_udp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.234: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815 from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.239: INFO: Unable to read jessie_udp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.249: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.254: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc from pod dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5: the server could not find the requested resource (get pods dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5) +Apr 29 18:25:36.297: 
INFO: Lookups using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9815 wheezy_tcp@dns-test-service.dns-9815 wheezy_udp@dns-test-service.dns-9815.svc wheezy_tcp@dns-test-service.dns-9815.svc wheezy_udp@_http._tcp.dns-test-service.dns-9815.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9815.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9815 jessie_tcp@dns-test-service.dns-9815 jessie_udp@dns-test-service.dns-9815.svc jessie_tcp@dns-test-service.dns-9815.svc jessie_udp@_http._tcp.dns-test-service.dns-9815.svc jessie_tcp@_http._tcp.dns-test-service.dns-9815.svc] + +Apr 29 18:25:41.253: INFO: DNS probes using dns-9815/dns-test-ab0a3d8f-b7d4-4956-802f-4ccaca21fad5 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:25:41.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9815" for this suite. + +• [SLOW TEST:34.598 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":40,"skipped":671,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:25:41.373: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Apr 29 18:25:41.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-2743 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' +Apr 29 18:25:41.511: INFO: stderr: "" +Apr 29 18:25:41.511: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 
+Apr 29 18:25:41.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-2743 delete pods e2e-test-httpd-pod' +Apr 29 18:25:52.811: INFO: stderr: "" +Apr 29 18:25:52.811: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:25:52.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2743" for this suite. + +• [SLOW TEST:11.450 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":41,"skipped":677,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:25:52.823: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:25:57.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8762" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":42,"skipped":680,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:25:57.535: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:26:04.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2484" for this suite. + +• [SLOW TEST:7.069 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":43,"skipped":726,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:26:04.605: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:04.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-9720" for this suite. 
+ +• [SLOW TEST:300.085 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":44,"skipped":760,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:04.692: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:31:04.749: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Apr 29 18:31:14.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-8772 --namespace=crd-publish-openapi-8772 create -f -' +Apr 29 18:31:17.708: INFO: stderr: "" +Apr 29 18:31:17.708: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Apr 29 18:31:17.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-8772 --namespace=crd-publish-openapi-8772 delete e2e-test-crd-publish-openapi-7847-crds test-cr' +Apr 29 18:31:17.802: INFO: stderr: "" +Apr 29 18:31:17.802: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Apr 29 18:31:17.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-8772 --namespace=crd-publish-openapi-8772 apply -f -' +Apr 29 18:31:18.493: INFO: stderr: "" +Apr 29 18:31:18.493: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Apr 29 18:31:18.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-8772 --namespace=crd-publish-openapi-8772 delete e2e-test-crd-publish-openapi-7847-crds test-cr' +Apr 29 18:31:18.578: INFO: stderr: "" +Apr 29 18:31:18.578: INFO: stdout: "e2e-test-crd-publish-openapi-7847-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Apr 29 18:31:18.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-8772 explain e2e-test-crd-publish-openapi-7847-crds' +Apr 29 18:31:19.024: INFO: stderr: "" +Apr 29 18:31:19.024: INFO: stdout: 
"KIND: e2e-test-crd-publish-openapi-7847-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:27.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8772" for this suite. + +• [SLOW TEST:22.441 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":45,"skipped":791,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:27.135: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Apr 29 18:31:27.194: INFO: Pod name pod-release: Found 0 pods out of 1 +Apr 29 18:31:32.201: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:33.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2641" for this suite. 
+ +• [SLOW TEST:6.113 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":46,"skipped":811,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:33.250: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1388.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1388.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 18:31:39.384: INFO: DNS probes using dns-1388/dns-test-17b0f122-0eb8-4979-9f5d-921fd40b9213 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:39.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-1388" for this suite. 
+ +• [SLOW TEST:6.169 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":47,"skipped":826,"failed":0} +SSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:39.419: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-a4683ab8-6e88-4292-b979-2cdfa3980751 +STEP: Creating a pod to test consume secrets +Apr 29 18:31:39.488: INFO: Waiting up to 5m0s for pod "pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea" in namespace "secrets-2570" to be "Succeeded or Failed" +Apr 29 18:31:39.495: INFO: Pod "pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.891324ms +Apr 29 18:31:41.505: INFO: Pod "pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea": Phase="Running", Reason="", readiness=true. Elapsed: 2.016972638s +Apr 29 18:31:43.513: INFO: Pod "pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024806852s +STEP: Saw pod success +Apr 29 18:31:43.513: INFO: Pod "pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea" satisfied condition "Succeeded or Failed" +Apr 29 18:31:43.517: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea container secret-volume-test: +STEP: delete the pod +Apr 29 18:31:43.555: INFO: Waiting for pod pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea to disappear +Apr 29 18:31:43.559: INFO: Pod pod-secrets-3eb90b2a-6081-4d03-acdc-fdc062fadaea no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:43.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2570" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":48,"skipped":829,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:43.575: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. +Apr 29 18:31:43.630: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:31:45.635: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Apr 29 18:31:45.652: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:31:47.659: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:31:49.659: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Apr 29 18:31:49.678: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Apr 29 18:31:49.682: INFO: Pod pod-with-poststart-exec-hook still exists +Apr 29 18:31:51.683: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Apr 29 18:31:51.689: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:51.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-8887" for this suite. 
+ +• [SLOW TEST:8.128 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":49,"skipped":850,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:51.704: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Apr 29 18:31:57.826: INFO: The status of Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf is Running (Ready = true) +Apr 29 18:31:58.093: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:31:58.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7528" for this suite. 
+ +• [SLOW TEST:6.403 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":50,"skipped":885,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:31:58.108: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Apr 29 18:31:58.203: INFO: Number of nodes with available pods: 0 +Apr 29 18:31:58.203: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:31:59.218: INFO: Number of nodes with available pods: 0 +Apr 29 18:31:59.218: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:32:00.215: INFO: Number of nodes with available pods: 2 +Apr 29 18:32:00.215: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Apr 29 18:32:00.247: INFO: Number of nodes with available pods: 1 +Apr 29 18:32:00.247: INFO: Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc is running more than one daemon pod +Apr 29 18:32:01.258: INFO: Number of nodes with available pods: 1 +Apr 29 18:32:01.258: INFO: Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc is running more than one daemon pod +Apr 29 18:32:02.259: INFO: Number of nodes with available pods: 2 +Apr 29 18:32:02.259: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4567, will wait for the garbage collector to delete the pods +Apr 29 18:32:02.329: INFO: Deleting DaemonSet.extensions daemon-set took: 6.175358ms +Apr 29 18:32:02.430: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.307799ms +Apr 29 18:32:05.136: INFO: Number of nodes with available pods: 0 +Apr 29 18:32:05.136: INFO: Number of running nodes: 0, number of available pods: 0 +Apr 29 18:32:05.141: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"720266"},"items":null} + +Apr 29 18:32:05.144: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"720266"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:32:05.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4567" for this suite. + +• [SLOW TEST:7.066 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":51,"skipped":921,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:32:05.174: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:32:05.223: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Apr 29 18:32:10.229: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Apr 29 18:32:10.229: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 18:32:12.268: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-4277 a2912cd3-fa4a-4fff-a68f-51c938d67302 720389 1 2022-04-29 18:32:10 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-29 18:32:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:32:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005f0fc18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 18:32:10 +0000 UTC,LastTransitionTime:2022-04-29 18:32:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2022-04-29 18:32:12 +0000 UTC,LastTransitionTime:2022-04-29 18:32:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Apr 29 18:32:12.272: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-4277 1f24507f-c927-4183-bd4d-99d87a3d01ea 720379 1 2022-04-29 18:32:10 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-cleanup-deployment a2912cd3-fa4a-4fff-a68f-51c938d67302 0xc005f981c7 0xc005f981c8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 18:32:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2912cd3-fa4a-4fff-a68f-51c938d67302\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:32:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005f982d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:32:12.278: INFO: Pod "test-cleanup-deployment-5b4d99b59b-5mf6f" is available: +&Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-5mf6f test-cleanup-deployment-5b4d99b59b- deployment-4277 1b5da934-88da-4746-aa24-14127acdb4ad 720378 0 2022-04-29 18:32:10 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 1f24507f-c927-4183-bd4d-99d87a3d01ea 0xc005f98907 0xc005f98908}] [] [{kube-controller-manager Update v1 2022-04-29 18:32:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1f24507f-c927-4183-bd4d-99d87a3d01ea\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:32:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2fsz8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2fsz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kube
rnetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:32:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.133,StartTime:2022-04-29 18:32:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:32:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://0bb1c0170430d6dda82751f8a8c6054ff543f072fdc9ae5e91486804669c3a09,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:32:12.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4277" for this suite. 
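+The deployment dump above shows RevisionHistoryLimit:*0, which is why the superseded ReplicaSet is garbage-collected as soon as the new revision completes. A minimal hand-run sketch of the same cleanup (resource names and image tags are illustrative, not from this run):
+
+```console
+$ kubectl create deployment cleanup-demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.32
+$ kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
+$ kubectl set image deployment/cleanup-demo agnhost=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
+$ kubectl rollout status deployment/cleanup-demo
+$ kubectl get rs -l app=cleanup-demo   # only the ReplicaSet for the current revision should remain
+```
+
+With revisionHistoryLimit left at its default of 10, the old ReplicaSet would instead be kept at 0 replicas to allow rollbacks.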
+ +• [SLOW TEST:7.115 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":52,"skipped":940,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:32:12.289: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:32:12.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:32:15.908: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:32:15.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3932" for this suite. +STEP: Destroying namespace "webhook-3932-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":53,"skipped":951,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:32:16.032: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-7e24bf48-926a-445c-bc87-c2ef35a57f7c in namespace container-probe-7798 +Apr 29 18:32:20.086: INFO: Started pod busybox-7e24bf48-926a-445c-bc87-c2ef35a57f7c in namespace container-probe-7798 +STEP: checking the pod's current state and verifying that restartCount is present +Apr 29 18:32:20.090: INFO: Initial restart count of pod busybox-7e24bf48-926a-445c-bc87-c2ef35a57f7c is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:36:21.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7798" for this suite. 
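+This spec parks a pod for roughly four minutes and asserts that restartCount stays at 0, because the probed `cat /tmp/health` keeps succeeding. A comparable pod can be sketched by hand (not the harness's exact spec; the name, image tag, and timings are illustrative):
+
+```console
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-demo
+spec:
+  restartPolicy: Always
+  containers:
+  - name: busybox
+    image: busybox
+    command: ["/bin/sh", "-c", "touch /tmp/health && sleep 600"]
+    livenessProbe:
+      exec:
+        command: ["cat", "/tmp/health"]
+      initialDelaySeconds: 5
+      periodSeconds: 5
+EOF
+$ kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
+# expect 0 for as long as /tmp/health exists
+```
+
+The inverse conformance case removes /tmp/health partway through, after which the kubelet's probe fails, the container is restarted, and restartCount increments.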
+ +• [SLOW TEST:245.508 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":54,"skipped":970,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:36:21.543: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Apr 29 18:36:21.596: INFO: Waiting up to 5m0s for pod "pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119" in namespace "emptydir-1405" to be "Succeeded or Failed" +Apr 29 18:36:21.601: INFO: Pod "pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119": Phase="Pending", Reason="", readiness=false. Elapsed: 4.69919ms +Apr 29 18:36:23.607: INFO: Pod "pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119": Phase="Running", Reason="", readiness=true. Elapsed: 2.011099378s +Apr 29 18:36:25.613: INFO: Pod "pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017321041s +STEP: Saw pod success +Apr 29 18:36:25.614: INFO: Pod "pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119" satisfied condition "Succeeded or Failed" +Apr 29 18:36:25.617: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119 container test-container: +STEP: delete the pod +Apr 29 18:36:25.645: INFO: Waiting for pod pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119 to disappear +Apr 29 18:36:25.649: INFO: Pod pod-6e5d8c5f-a40d-4d25-be98-1bc728e3b119 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:36:25.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1405" for this suite. 
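+The (root,0666,default) case writes a file with mode 0666 into an emptyDir backed by the node's default medium and verifies the result from the container's logs. Roughly the same check by hand, with busybox standing in for the harness's mount-test image (all names illustrative):
+
+```console
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["/bin/sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && ls -l /mnt/test"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt
+  volumes:
+  - name: scratch
+    emptyDir: {}   # default medium, i.e. node-local disk; medium: Memory would use tmpfs
+EOF
+$ kubectl logs emptydir-demo   # expect: -rw-rw-rw- ... /mnt/test
+```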
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":55,"skipped":976,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:36:25.662: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:36:25.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3141" for this suite. +•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":56,"skipped":988,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:36:25.762: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption is created +Apr 29 18:36:25.805: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:27.811: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:29.810: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:36:30.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-249" for this suite. 
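+Adoption here means the ReplicationController takes ownership of the pre-existing pod matching its selector instead of creating a second one, which the suite verifies through the pod's ownerReferences. A hand-run version of the same sequence (the manifest is a sketch; the image is the one the suite uses for its webserver pods):
+
+```console
+$ kubectl run pod-adoption --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=name=pod-adoption
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: pod-adoption
+spec:
+  replicas: 1
+  selector:
+    name: pod-adoption
+  template:
+    metadata:
+      labels:
+        name: pod-adoption
+    spec:
+      containers:
+      - name: httpd
+        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
+EOF
+$ kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'
+# expect: ReplicationController
+```
+
+If the pod's labels were later changed so the selector no longer matched, the controller would release the pod and create a replacement to get back to the desired replica count.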
+ +• [SLOW TEST:5.081 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":57,"skipped":1005,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:36:30.847: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:36:30.884: INFO: Creating deployment "webserver-deployment" +Apr 29 18:36:30.892: INFO: Waiting for observed generation 1 +Apr 29 18:36:32.904: INFO: Waiting for all required pods to come up +Apr 29 18:36:32.914: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Apr 29 18:36:36.927: INFO: Waiting for deployment "webserver-deployment" to complete +Apr 29 18:36:36.937: INFO: Updating deployment "webserver-deployment" with a non-existent image +Apr 29 18:36:36.949: INFO: Updating deployment webserver-deployment +Apr 29 18:36:36.949: INFO: Waiting for observed generation 2 +Apr 29 18:36:38.960: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Apr 29 18:36:38.965: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Apr 29 18:36:38.971: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Apr 29 18:36:38.983: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Apr 29 18:36:38.983: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Apr 29 18:36:38.986: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Apr 29 18:36:38.992: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Apr 29 18:36:38.993: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Apr 29 18:36:39.005: INFO: Updating deployment webserver-deployment +Apr 29 18:36:39.005: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Apr 29 18:36:39.018: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Apr 29 18:36:39.023: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 18:36:39.043: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-8351 dba6a9c4-1354-4fce-a186-956befd17ead 722626 3 2022-04-29 18:36:30 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:36:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002320118 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-04-29 18:36:37 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-29 18:36:39 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Apr 29 18:36:39.056: INFO: New ReplicaSet "webserver-deployment-795d758f88" 
of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-8351 071c9e18-239a-41a8-81ae-54ed0ddfc32d 722620 3 2022-04-29 18:36:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment dba6a9c4-1354-4fce-a186-956befd17ead 0xc0033a9707 0xc0033a9708}] [] [{kube-controller-manager Update apps/v1 2022-04-29 18:36:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dba6a9c4-1354-4fce-a186-956befd17ead\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:36:36 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033a97a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:36:39.056: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Apr 29 18:36:39.056: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-8351 ccf4b765-8dd6-4047-8371-8ee221f57074 722617 3 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment dba6a9c4-1354-4fce-a186-956befd17ead 0xc0033a9807 0xc0033a9808}] [] [{kube-controller-manager Update apps/v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dba6a9c4-1354-4fce-a186-956befd17ead\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033a9898 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:36:39.079: INFO: Pod "webserver-deployment-795d758f88-4zjf8" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-4zjf8 webserver-deployment-795d758f88- deployment-8351 22e0fdd2-2021-4048-8665-b25dc79d3005 722662 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc0033a9d87 0xc0033a9d88}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9kbcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kbcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerSta
tus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.081: INFO: Pod "webserver-deployment-795d758f88-586kl" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-586kl webserver-deployment-795d758f88- deployment-8351 6df87764-a0b9-43f6-aa18-c6540fabe758 722564 0 2022-04-29 18:36:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc0033a9ee0 0xc0033a9ee1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-khfv5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khfv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCon
text:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:,StartTime:2022-04-29 18:36:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.081: INFO: Pod "webserver-deployment-795d758f88-j5xcq" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-j5xcq webserver-deployment-795d758f88- deployment-8351 f24581c2-0ec1-4acf-96bd-7579af927ad2 722636 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b260b7 0xc003b260b8}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9qk26,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qk26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreacha
ble,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.082: INFO: Pod "webserver-deployment-795d758f88-khzrh" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-khzrh webserver-deployment-795d758f88- deployment-8351 c8d60ad7-cfd1-475c-af50-43544f471cdb 722569 0 2022-04-29 18:36:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b26220 0xc003b26221}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t7rxk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7rxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:Tru
e,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:,StartTime:2022-04-29 18:36:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.083: INFO: Pod "webserver-deployment-795d758f88-kn6d8" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-kn6d8 webserver-deployment-795d758f88- deployment-8351 6665954a-4c75-45e9-8557-90c603655fbe 722654 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b263f7 0xc003b263f8}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r9tqh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9tqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,Las
tProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.083: INFO: Pod "webserver-deployment-795d758f88-lvfx2" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-lvfx2 webserver-deployment-795d758f88- deployment-8351 08f91f12-43be-466a-b07d-8d73c1c44719 722666 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b26560 0xc003b26561}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4kzsk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4kzsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.084: INFO: Pod "webserver-deployment-795d758f88-m8vkp" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-m8vkp webserver-deployment-795d758f88- deployment-8351 8db801a7-612e-440a-b189-5322b63b8209 722656 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b266c0 0xc003b266c1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2kl92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kl92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerSta
tus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.084: INFO: Pod "webserver-deployment-795d758f88-pm7xf" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-pm7xf webserver-deployment-795d758f88- deployment-8351 268b818f-6975-4db1-b53f-f1b5c23168b0 722610 0 2022-04-29 18:36:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b26810 0xc003b26811}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mh5gf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh5gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityCon
text:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:,StartTime:2022-04-29 18:36:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.085: INFO: Pod "webserver-deployment-795d758f88-pxj8w" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-pxj8w webserver-deployment-795d758f88- deployment-8351 e912ed90-bbc4-48c5-94c2-c6cda986dd3b 722659 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b269e7 0xc003b269e8}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7689d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7689d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreac
hable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.085: INFO: Pod "webserver-deployment-795d758f88-rr98p" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-rr98p webserver-deployment-795d758f88- deployment-8351 7d76e71c-2902-4c2b-bd30-02bd734aba18 722575 0 2022-04-29 18:36:36 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b26b50 0xc003b26b51}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zkx28,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:T
rue,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:,StartTime:2022-04-29 18:36:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.086: INFO: Pod "webserver-deployment-795d758f88-ss4g9" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-ss4g9 webserver-deployment-795d758f88- deployment-8351 f9a60915-5710-4255-b805-b396e796fa03 722591 0 2022-04-29 18:36:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b26d27 0xc003b26d28}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7jdtv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jdtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:Tru
e,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:,StartTime:2022-04-29 18:36:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.086: INFO: Pod "webserver-deployment-795d758f88-wsd4z" is not available: +&Pod{ObjectMeta:{webserver-deployment-795d758f88-wsd4z webserver-deployment-795d758f88- deployment-8351 0fe1a068-d326-41d3-9e76-682644462b52 722639 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 071c9e18-239a-41a8-81ae-54ed0ddfc32d 0xc003b26f07 0xc003b26f08}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071c9e18-239a-41a8-81ae-54ed0ddfc32d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v6d9m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6d9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,L
astProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.087: INFO: Pod "webserver-deployment-847dcfb7fb-2c6tv" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2c6tv webserver-deployment-847dcfb7fb- deployment-8351 9d8f7409-d6a5-4942-adf3-dcc56bf63e76 722663 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27070 0xc003b27071}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xltkz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xltkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition
{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:,StartTime:2022-04-29 18:36:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.088: INFO: Pod "webserver-deployment-847dcfb7fb-57gqd" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-57gqd webserver-deployment-847dcfb7fb- deployment-8351 17af4206-0c9f-4e1d-8123-9f1356a1dfde 722465 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27227 0xc003b27228}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vrt6d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vrt6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:100.96.0.103,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://de7836a64b34c9232b36125faa3635dad612105294a38ccd997cac5171887289,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.088: INFO: Pod "webserver-deployment-847dcfb7fb-5h67n" is not available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5h67n webserver-deployment-847dcfb7fb- deployment-8351 1f31885b-ef76-40c2-9f1e-239549f6b676 722643 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27407 0xc003b27408}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q28t7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q28t7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.088: INFO: Pod "webserver-deployment-847dcfb7fb-5qwnl" is not available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-5qwnl webserver-deployment-847dcfb7fb- deployment-8351 3a7e6436-b5af-4886-a257-c054d9e2765b 722664 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27560 0xc003b27561}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tr8lv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tr8lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.089: INFO: Pod "webserver-deployment-847dcfb7fb-7bst9" is not available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7bst9 webserver-deployment-847dcfb7fb- deployment-8351 0ffc8f8c-5b59-4ed6-8323-54b1796ac053 722661 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b276a0 0xc003b276a1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vr5ks,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vr5ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.090: INFO: Pod "webserver-deployment-847dcfb7fb-7n55m" is available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7n55m webserver-deployment-847dcfb7fb- deployment-8351 f85c9f70-c502-40f2-937d-3ff2f5150547 722475 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b277e0 0xc003b277e1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gh96z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gh96z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.139,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://b5fa8c606c66d62c1d81b3836c60f11b0abe4891032e7251a072ab32e5af372d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.090: INFO: Pod "webserver-deployment-847dcfb7fb-7zgsl" is available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7zgsl webserver-deployment-847dcfb7fb- deployment-8351 7feff55b-1e62-4bbd-b1a4-08479f7a98c1 722468 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b279b7 0xc003b279b8}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.104\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hv422,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hv422,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:100.96.0.104,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://ba0776f97b966b6e6de9532d5be9812323841dbf3b357c808ed3a2bc2d0f4e72,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.091: INFO: Pod "webserver-deployment-847dcfb7fb-blqgt" is available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-blqgt webserver-deployment-847dcfb7fb- deployment-8351 57768c16-1462-4930-a8a1-f8001b384dc6 722471 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27b97 0xc003b27b98}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.102\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ncgz5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncgz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:100.96.0.102,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://fe455ec31391caa9a537c7733db62756dc09c22065ead9921077c730821fc647,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.102,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.091: INFO: Pod "webserver-deployment-847dcfb7fb-cc68k" is available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-cc68k webserver-deployment-847dcfb7fb- deployment-8351 504e2b96-b2ef-4559-8c7b-d1d3954c9b15 722485 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27d77 0xc003b27d78}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-npglq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-npglq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.138,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://a80f5f68e4b0a0a5f558412e2a702861fb6c5d4d7efdc35bcee26eade94ac416,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.092: INFO: Pod "webserver-deployment-847dcfb7fb-f44rn" is not available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-f44rn webserver-deployment-847dcfb7fb- deployment-8351 997ce7be-6c00-4874-b1c7-1a07da75da56 722665 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003b27f67 0xc003b27f68}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qsb6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsb6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.092: INFO: Pod "webserver-deployment-847dcfb7fb-grrjv" is not available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-grrjv webserver-deployment-847dcfb7fb- deployment-8351 69a08944-143c-4125-aac3-2910a78e23cb 722658 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc0038440c0 0xc0038440c1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m46kg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m46kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.093: INFO: Pod "webserver-deployment-847dcfb7fb-gsqkp" is available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gsqkp webserver-deployment-847dcfb7fb- deployment-8351 9bec64e4-5ad0-446f-bba1-d80a8332778b 722477 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844210 0xc003844211}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.105\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p4dcm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4dcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:100.96.0.105,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://2e467381efde34c985c54ac023dfc65e65c24ef873edcce27418b860ce31e044,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Apr 29 18:36:39.094: INFO: Pod "webserver-deployment-847dcfb7fb-h76hg" is not available:
+&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-h76hg webserver-deployment-847dcfb7fb- deployment-8351 d1fc0498-4679-448f-b15d-3f8ef2c720ce 722630 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc0038443e7 0xc0038443e8}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5crqp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5crqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.094: INFO: Pod "webserver-deployment-847dcfb7fb-hn2ws" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hn2ws webserver-deployment-847dcfb7fb- deployment-8351 29146837-f6aa-41d7-b214-34d7a593e117 722641 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844540 0xc003844541}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2ffw8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2ffw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationM
essagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.095: INFO: Pod "webserver-deployment-847dcfb7fb-m7jmh" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-m7jmh webserver-deployment-847dcfb7fb- deployment-8351 5650fb92-1d2f-4cde-800f-d5bbc3d643c9 722655 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844690 0xc003844691}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gkh57,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkh57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,Cont
ainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.096: INFO: Pod "webserver-deployment-847dcfb7fb-pmmd2" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pmmd2 webserver-deployment-847dcfb7fb- deployment-8351 6b9d2d36-8794-4300-b143-26de1bcf76a4 722647 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc0038447e0 0xc0038447e1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lpkgj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpkgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodConditi
on{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:,StartTime:2022-04-29 18:36:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.096: INFO: Pod "webserver-deployment-847dcfb7fb-snwlg" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-snwlg webserver-deployment-847dcfb7fb- deployment-8351 3cc29900-d172-4f4a-bf2b-2572ffbfe616 722657 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844997 0xc003844998}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4w52f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4w52f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,Cont
ainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.097: INFO: Pod "webserver-deployment-847dcfb7fb-vcf2d" is not available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vcf2d webserver-deployment-847dcfb7fb- deployment-8351 e32eee7f-6f44-4bc1-92ea-00e13ca2425d 722642 0 2022-04-29 18:36:39 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844ae0 0xc003844ae1}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fjmp9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjmp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:ma
p[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.097: INFO: Pod "webserver-deployment-847dcfb7fb-w5zw8" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-w5zw8 webserver-deployment-847dcfb7fb- deployment-8351 483b341a-e4fe-4a9a-8337-56515fbe1feb 722504 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844c30 0xc003844c31}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.141\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gxzrq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxzrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditi
on{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.141,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://e58def20d394af2ea81ca842e803e8e14bc2d23b1f2f7b861ea5f1b3eff7bb0a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.141,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:36:39.098: INFO: Pod "webserver-deployment-847dcfb7fb-zsb5t" is available: +&Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-zsb5t webserver-deployment-847dcfb7fb- deployment-8351 2f1c02a7-1710-4865-8557-7da7314280a6 722493 0 2022-04-29 18:36:30 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ccf4b765-8dd6-4047-8371-8ee221f57074 0xc003844e07 0xc003844e08}] [] [{kube-controller-manager Update v1 2022-04-29 18:36:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccf4b765-8dd6-4047-8371-8ee221f57074\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:36:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8kbqr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kbqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditi
on{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:36:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.140,StartTime:2022-04-29 18:36:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:36:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://c20b34538fc27485ffa6fe7cc376ccffc0d3aa6ba95e7e4b74e963a38bd3444f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:36:39.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8351" for this suite. + +• [SLOW TEST:8.280 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":58,"skipped":1038,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:36:39.127: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:36:39.208: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:41.215: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:43.219: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:45.215: INFO: The 
status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:47.213: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:49.213: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:51.214: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:36:53.214: INFO: The status of Pod pod-secrets-66c16016-ad15-4da4-a0ed-e37b5078c0e8 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:36:53.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-910" for this suite. + +• [SLOW TEST:14.137 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":59,"skipped":1039,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:36:53.265: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:37:06.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7921" for this suite. 
+ +• [SLOW TEST:13.137 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":60,"skipped":1044,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:37:06.405: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:37:06.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5828" for this suite. +•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":61,"skipped":1103,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:37:06.498: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:37:23.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1910" for this suite. + +• [SLOW TEST:17.097 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":62,"skipped":1110,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:37:23.595: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service multi-endpoint-test in namespace services-680 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-680 to expose endpoints map[] +Apr 29 18:37:23.644: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found +Apr 29 18:37:24.655: INFO: successfully validated that service multi-endpoint-test in namespace services-680 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-680 +Apr 29 18:37:24.667: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:37:26.673: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-680 to expose endpoints map[pod1:[100]] +Apr 29 18:37:26.690: INFO: successfully validated that service multi-endpoint-test in namespace services-680 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-680 +Apr 29 18:37:26.706: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:37:28.716: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-680 to expose endpoints map[pod1:[100] pod2:[101]] +Apr 29 18:37:28.739: INFO: successfully validated that service multi-endpoint-test in namespace services-680 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Apr 29 18:37:28.739: INFO: Creating new exec pod +Apr 29 18:37:31.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-680 exec execpod6rdw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Apr 29 18:37:32.042: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Apr 29 18:37:32.043: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:37:32.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-680 exec execpod6rdw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.124.80 80' +Apr 29 18:37:32.238: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.124.80 80\nConnection to 100.67.124.80 80 port [tcp/http] succeeded!\n" +Apr 29 18:37:32.238: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:37:32.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-680 exec execpod6rdw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Apr 29 18:37:32.433: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Apr 29 18:37:32.433: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:37:32.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-680 exec execpod6rdw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.67.124.80 81' +Apr 29 18:37:32.619: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.67.124.80 81\nConnection to 100.67.124.80 81 port [tcp/*] succeeded!\n" +Apr 29 18:37:32.619: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-680 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-680 to expose endpoints map[pod2:[101]] +Apr 29 18:37:32.643: INFO: successfully validated that service multi-endpoint-test in namespace services-680 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-680 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-680 to expose endpoints map[] +Apr 29 18:37:32.670: INFO: successfully validated that service multi-endpoint-test in namespace services-680 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:37:32.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-680" for this suite. 
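A multi-port Service equivalent to the `multi-endpoint-test` object probed above can be sketched as follows; the selector, service name, and target ports are illustrative assumptions, not the suite's actual values:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: multiport-demo
spec:
  selector:
    app: demo          # must match the labels on the backing pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: alt
    port: 81
    targetPort: 8081
EOF
$ kubectl get endpoints multiport-demo
```

Each ready pod matching the selector shows up in the Endpoints object with both ports, which is the `map[pod1:[100] pod2:[101]]`-style state the test waits for before probing each port with `nc` from the exec pod.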
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.113 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":63,"skipped":1126,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:37:32.708: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:37:33.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:37:36.096: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Apr 29 18:37:36.127: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:37:36.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8039" for this suite. +STEP: Destroying namespace "webhook-8039-markers" for this suite. 
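A hand-written equivalent of the CRD-denying webhook registration is sketched below; the service reference and CA bundle are placeholders a real cluster must supply (they come from the suite's generated cert and webhook deployment):

```console
$ kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-demo
webhooks:
- name: deny-crd.example.com
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: default           # placeholder: namespace of the webhook server
      name: e2e-test-webhook       # placeholder: Service fronting the webhook server
      path: /crd
    caBundle: <base64-encoded-CA>  # placeholder: CA that signed the server cert
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF
```

With this in place, any `kubectl create -f some-crd.yaml` is rejected for as long as the webhook backend answers deny.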
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":64,"skipped":1131,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:37:36.214: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Apr 29 18:37:36.256: INFO: Waiting up to 5m0s for pod "pod-329b8fe6-457f-4ebb-9d00-876caa570c2f" in namespace "emptydir-4549" to be "Succeeded or Failed" +Apr 29 18:37:36.264: INFO: Pod "pod-329b8fe6-457f-4ebb-9d00-876caa570c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.483488ms +Apr 29 18:37:38.271: INFO: Pod "pod-329b8fe6-457f-4ebb-9d00-876caa570c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014795705s +Apr 29 18:37:40.276: INFO: Pod "pod-329b8fe6-457f-4ebb-9d00-876caa570c2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020203569s +STEP: Saw pod success +Apr 29 18:37:40.276: INFO: Pod "pod-329b8fe6-457f-4ebb-9d00-876caa570c2f" satisfied condition "Succeeded or Failed" +Apr 29 18:37:40.280: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-329b8fe6-457f-4ebb-9d00-876caa570c2f container test-container: +STEP: delete the pod +Apr 29 18:37:40.301: INFO: Waiting for pod pod-329b8fe6-457f-4ebb-9d00-876caa570c2f to disappear +Apr 29 18:37:40.305: INFO: Pod pod-329b8fe6-457f-4ebb-9d00-876caa570c2f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:37:40.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4549" for this suite. 
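The tmpfs variant of emptyDir exercised here corresponds to `medium: Memory`; a minimal stand-alone pod to confirm the mount (image and paths are illustrative):

```console
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "mount | grep ' /data ' && ls -ld /data"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF
$ kubectl logs emptydir-tmpfs-demo
```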
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":65,"skipped":1133,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:37:40.318: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-7607 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:37:40.378: INFO: Found 0 stateful pods, waiting for 1 +Apr 29 18:37:50.384: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Apr 29 18:37:50.408: INFO: Found 1 stateful pods, waiting for 2 +Apr 29 18:38:00.413: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 18:38:00.413: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 18:38:00.439: INFO: Deleting all statefulset in ns statefulset-7607 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:38:00.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7607" for this suite. 
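The list/patch/delete-collection flow above maps onto plain kubectl operations; the label and StatefulSet name below are illustrative:

```console
$ kubectl get statefulsets --all-namespaces
$ kubectl patch statefulset test-ss --type=merge \
    -p '{"metadata":{"labels":{"e2e":"patched"}}}'
$ kubectl delete statefulsets -l e2e=patched
```

The label-selector delete is what exercises the collection endpoint, as opposed to deleting objects one by one.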
+ +• [SLOW TEST:20.162 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":66,"skipped":1188,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:38:00.480: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Apr 29 18:38:00.582: INFO: Number of nodes with available pods: 0 +Apr 29 18:38:00.582: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:38:01.596: INFO: Number of nodes with available pods: 0 +Apr 29 18:38:01.596: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:38:02.594: INFO: Number of nodes with available pods: 2 +Apr 29 18:38:02.594: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Getting /status +Apr 29 18:38:02.610: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Apr 29 18:38:02.624: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Apr 29 18:38:02.630: INFO: Observed &DaemonSet event: ADDED +Apr 29 18:38:02.630: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.630: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.631: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.631: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.631: INFO: Found daemon set daemon-set in namespace daemonsets-1469 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Apr 29 18:38:02.631: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Apr 29 18:38:02.644: INFO: Observed &DaemonSet event: ADDED +Apr 29 18:38:02.645: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.645: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.645: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.646: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.646: INFO: Observed daemon set daemon-set in namespace daemonsets-1469 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Apr 29 18:38:02.646: INFO: Observed &DaemonSet event: MODIFIED +Apr 29 18:38:02.646: INFO: Found daemon set daemon-set in namespace daemonsets-1469 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Apr 29 18:38:02.646: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1469, will wait for the garbage collector to delete the pods +Apr 29 18:38:02.712: INFO: Deleting DaemonSet.extensions daemon-set took: 6.106939ms +Apr 29 18:38:02.813: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.096323ms +Apr 29 18:38:05.617: INFO: Number of nodes with available pods: 0 +Apr 29 18:38:05.617: INFO: Number of running nodes: 0, number of available pods: 0 +Apr 29 18:38:05.620: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"723878"},"items":null} + +Apr 29 18:38:05.624: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"723878"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:38:05.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1469" for this suite. + +• [SLOW TEST:5.169 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":67,"skipped":1201,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:38:05.650: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-b9546016-aadb-455a-afcb-ced34922d716 in namespace container-probe-7786 +Apr 29 18:38:07.700: INFO: Started pod liveness-b9546016-aadb-455a-afcb-ced34922d716 in namespace container-probe-7786 +STEP: checking the pod's current state and verifying that restartCount is present +Apr 29 18:38:07.704: INFO: Initial restart count of pod liveness-b9546016-aadb-455a-afcb-ced34922d716 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:08.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7786" for this suite. 
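The probe under test is a plain TCP-socket check that keeps succeeding, so restartCount stays 0 for the whole observation window; a reproducible sketch (nginx on port 80 standing in for the suite's port-8080 server):

```console
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
$ kubectl get pod liveness-tcp-demo \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'
```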
+ +• [SLOW TEST:243.056 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1235,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:08.715: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Service +STEP: watching for the Service to be added +Apr 29 18:42:08.794: INFO: Found Service test-service-bgmj7 in namespace services-8189 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Apr 29 18:42:08.794: INFO: Service test-service-bgmj7 created +STEP: Getting /status +Apr 29 18:42:08.805: INFO: Service test-service-bgmj7 has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Apr 29 18:42:08.817: INFO: observed Service test-service-bgmj7 in namespace services-8189 with annotations: map[] & LoadBalancer: {[]} +Apr 29 18:42:08.817: INFO: Found Service test-service-bgmj7 in namespace services-8189 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Apr 29 18:42:08.817: INFO: Service test-service-bgmj7 has service status patched +STEP: updating the ServiceStatus +Apr 29 18:42:08.831: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Apr 29 18:42:08.835: INFO: Observed Service test-service-bgmj7 in namespace services-8189 with annotations: map[] & Conditions: {[]} +Apr 29 18:42:08.835: INFO: Observed event: &Service{ObjectMeta:{test-service-bgmj7 services-8189 495a36ed-4601-4640-a070-bcec87d6638d 725623 0 2022-04-29 18:42:08 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-04-29 18:42:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2022-04-29 18:42:08 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:100.71.5.181,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[100.71.5.181],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Apr 29 18:42:08.838: INFO: Found Service test-service-bgmj7 in namespace services-8189 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Apr 29 18:42:08.839: INFO: Service test-service-bgmj7 has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Apr 29 18:42:08.851: INFO: observed Service test-service-bgmj7 in namespace services-8189 with labels: map[test-service-static:true] +Apr 29 18:42:08.852: INFO: observed Service test-service-bgmj7 in namespace services-8189 with labels: map[test-service-static:true] +Apr 29 18:42:08.852: INFO: observed Service test-service-bgmj7 in namespace services-8189 with labels: map[test-service-static:true] +Apr 29 18:42:08.852: INFO: Found Service test-service-bgmj7 in namespace services-8189 with labels: map[test-service:patched test-service-static:true] +Apr 29 18:42:08.852: INFO: Service test-service-bgmj7 patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Apr 29 18:42:08.875: INFO: Observed event: ADDED +Apr 29 18:42:08.875: INFO: Observed event: MODIFIED +Apr 29 18:42:08.875: INFO: Observed event: MODIFIED +Apr 29 18:42:08.875: INFO: Observed event: MODIFIED +Apr 29 18:42:08.875: INFO: Found Service test-service-bgmj7 in namespace services-8189 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Apr 29 18:42:08.875: INFO: Service test-service-bgmj7 deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:08.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8189" for this suite. 
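The status writes above target the Service's status subresource rather than its spec. kubectl v1.24+ (newer than the v1.22 client used in this run) exposes that directly; a sketch using the same documentation IP as the log:

```console
$ kubectl patch service test-service --subresource=status --type=merge \
    -p '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}'
$ kubectl get service test-service --subresource=status -o yaml
```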
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":69,"skipped":1272,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:08.898: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Apr 29 18:42:11.986: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:12.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-3244" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1279,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:12.014: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Apr 29 18:42:12.072: INFO: Pod name sample-pod: Found 0 pods out of 1 +Apr 29 18:42:17.080: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Apr 29 18:42:17.086: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Apr 29 18:42:17.105: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Apr 29 18:42:17.110: INFO: Observed &ReplicaSet event: ADDED +Apr 29 18:42:17.110: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.110: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.111: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.111: INFO: Found replicaset test-rs in namespace replicaset-6222 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Apr 29 18:42:17.112: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Apr 29 18:42:17.112: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Apr 29 18:42:17.123: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Apr 29 18:42:17.126: INFO: Observed &ReplicaSet event: ADDED +Apr 29 18:42:17.126: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.126: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.127: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.127: INFO: Observed replicaset test-rs in namespace replicaset-6222 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Apr 29 18:42:17.128: INFO: Observed &ReplicaSet event: MODIFIED +Apr 29 18:42:17.128: INFO: Found replicaset test-rs in namespace replicaset-6222 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Apr 29 18:42:17.128: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:17.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-6222" for this suite. 
+ +• [SLOW TEST:5.124 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":71,"skipped":1286,"failed":0} +SS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:17.139: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:42:18.193: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Apr 29 18:42:20.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854538, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854538, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854538, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854538, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:42:23.234: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which 
should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:23.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5167" for this suite. +STEP: Destroying namespace "webhook-5167-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.253 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":72,"skipped":1288,"failed":0} +S +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:23.393: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replication controller my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368 +Apr 29 18:42:23.441: INFO: Pod name my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368: Found 0 pods out of 1 +Apr 29 18:42:28.447: INFO: Pod name my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368: Found 1 pods out of 1 +Apr 29 18:42:28.448: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368" are running +Apr 29 18:42:28.453: INFO: Pod "my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368-h6mkd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 18:42:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 18:42:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 18:42:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 
+0000 UTC LastTransitionTime:2022-04-29 18:42:23 +0000 UTC Reason: Message:}]) +Apr 29 18:42:28.453: INFO: Trying to dial the pod +Apr 29 18:42:33.471: INFO: Controller my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368: Got expected result from replica 1 [my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368-h6mkd]: "my-hostname-basic-2091607f-63ca-46fa-b7eb-71e665eb7368-h6mkd", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:33.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2288" for this suite. + +• [SLOW TEST:10.089 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":73,"skipped":1289,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:33.484: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating cluster-info +Apr 29 18:42:33.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-8171 cluster-info' +Apr 29 18:42:34.036: INFO: stderr: "" +Apr 29 18:42:34.036: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://100.64.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:34.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8171" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":74,"skipped":1313,"failed":0} +SSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:34.049: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Apr 29 18:42:34.096: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:38.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-162" for this suite. +•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":75,"skipped":1318,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:38.432: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename tables +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:38.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-9038" for this suite. 
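Server-side Table rendering is negotiated via the Accept header; the 406 asserted above is what a backend answers when it cannot produce one. To see a successful Table response by hand (proxy port is illustrative):

```console
$ kubectl proxy --port=8001 &
$ curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
    http://127.0.0.1:8001/api/v1/namespaces/default/pods | head
```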
+•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":76,"skipped":1344,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:38.505: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting the proxy server +Apr 29 18:42:38.554: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-6148 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:38.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6148" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":77,"skipped":1349,"failed":0} +SSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:38.629: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Apr 29 18:42:38.670: INFO: Pod name sample-pod: Found 0 pods out of 1 +Apr 29 18:42:43.675: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:43.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5504" for this suite. 
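The scale subresource exercised above is also reachable from the command line; the ReplicaSet name and namespace are illustrative:

```console
$ kubectl scale replicaset test-rs --replicas=2
$ kubectl get --raw \
    /apis/apps/v1/namespaces/default/replicasets/test-rs/scale
```

The second command returns the `autoscaling/v1` Scale object that `kubectl scale` (and the e2e test) reads and writes.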
+ +• [SLOW TEST:5.092 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":78,"skipped":1353,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:43.723: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Namespace +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:43.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-7118" for this suite. +STEP: Destroying namespace "nspatchtest-3e97d644-4e31-4f7f-b4c5-dd843fbcd476-105" for this suite. +•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":79,"skipped":1392,"failed":0} +SSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:43.926: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Apr 29 18:42:43.992: INFO: Waiting up to 5m0s for pod "downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c" in namespace "downward-api-8169" to be "Succeeded or Failed" +Apr 29 18:42:43.996: INFO: Pod "downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.619829ms +Apr 29 18:42:46.004: INFO: Pod "downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011635763s +STEP: Saw pod success +Apr 29 18:42:46.004: INFO: Pod "downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c" satisfied condition "Succeeded or Failed" +Apr 29 18:42:46.009: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c container dapi-container: +STEP: delete the pod +Apr 29 18:42:46.043: INFO: Waiting for pod downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c to disappear +Apr 29 18:42:46.047: INFO: Pod downward-api-0e89333b-4629-43db-8644-a2bdf96ce45c no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:46.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8169" for this suite. +•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1398,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:46.060: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:42:46.131: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"fa1151b5-302b-4160-9092-e6876d9514f4", Controller:(*bool)(0xc003103e5a), BlockOwnerDeletion:(*bool)(0xc003103e5b)}} +Apr 29 18:42:46.138: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a412c35d-f9c6-481d-a159-35972dd89017", Controller:(*bool)(0xc0037394da), BlockOwnerDeletion:(*bool)(0xc0037394db)}} +Apr 29 18:42:46.147: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ea3ecf5b-5ef2-4d8e-ba79-de12d96f9c15", Controller:(*bool)(0xc0031a6372), BlockOwnerDeletion:(*bool)(0xc0031a6373)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1404" for this suite. 
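The circle above exists only in `metadata.ownerReferences` (pod1 → pod3 → pod2 → pod1), and the garbage collector unwinds it once any member goes away; inspecting and triggering that by hand might look like:

```console
$ kubectl get pod pod1 -o jsonpath='{.metadata.ownerReferences[0].name}'
$ kubectl delete pod pod1 --cascade=foreground
```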
+ +• [SLOW TEST:5.113 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":81,"skipped":1420,"failed":0} +SSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:51.173: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Apr 29 18:42:53.246: INFO: running pods: 0 < 1 +STEP: locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:55.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-9655" for this suite. 
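A PodDisruptionBudget with the same update/patch lifecycle can be driven manually; the selector and names are illustrative:

```console
$ kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: demo
EOF
$ kubectl patch pdb demo-pdb --type=merge \
    -p '{"metadata":{"annotations":{"patched":"true"}}}'
$ kubectl get pdb demo-pdb -o jsonpath='{.status.conditions}'
```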
+•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":82,"skipped":1425,"failed":0} +SSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:55.316: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-945d1ed7-805f-46e9-aa7f-4dc8e5fb875c +STEP: Creating a pod to test consume configMaps +Apr 29 18:42:55.367: INFO: Waiting up to 5m0s for pod "pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3" in namespace "configmap-8519" to be "Succeeded or Failed" +Apr 29 18:42:55.372: INFO: Pod "pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.971301ms +Apr 29 18:42:57.379: INFO: Pod "pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012055793s +Apr 29 18:42:59.386: INFO: Pod "pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018447073s +STEP: Saw pod success +Apr 29 18:42:59.386: INFO: Pod "pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3" satisfied condition "Succeeded or Failed" +Apr 29 18:42:59.390: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3 container agnhost-container: +STEP: delete the pod +Apr 29 18:42:59.413: INFO: Waiting for pod pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3 to disappear +Apr 29 18:42:59.418: INFO: Pod pod-configmaps-38bbc3dc-f4de-4b92-94aa-3eeebc0d24d3 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:42:59.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8519" for this suite. 
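The "mappings as non-root" case combines configMap volume `items` (key-to-path mappings) with a non-root securityContext; a stand-alone sketch with illustrative names, UID, and paths:

```console
$ kubectl create configmap demo-cm --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
      items:
      - key: data-1
        path: path/to/data-1
EOF
```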
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1429,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:42:59.435: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Apr 29 18:42:59.490: INFO: Waiting up to 5m0s for pod "downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521" in namespace "downward-api-2351" to be "Succeeded or Failed" +Apr 29 18:42:59.498: INFO: Pod "downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521": Phase="Pending", Reason="", readiness=false. Elapsed: 7.965882ms +Apr 29 18:43:01.505: INFO: Pod "downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014890032s +STEP: Saw pod success +Apr 29 18:43:01.505: INFO: Pod "downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521" satisfied condition "Succeeded or Failed" +Apr 29 18:43:01.509: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521 container dapi-container: +STEP: delete the pod +Apr 29 18:43:01.533: INFO: Waiting for pod downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521 to disappear +Apr 29 18:43:01.537: INFO: Pod downward-api-3c6bf3df-c9cb-4b9d-9aa5-6390dd74f521 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:43:01.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2351" for this suite. 
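The pod UID reaches the container through the downward API's `fieldRef`; a minimal sketch:

```console
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
$ kubectl logs dapi-uid-demo
```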
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":84,"skipped":1436,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:43:01.553: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-ca405c33-0e05-49aa-ad2d-f8d55d452853 +STEP: Creating configMap with name cm-test-opt-upd-54c273f0-1a10-459b-b0fc-4ef80c25439f +STEP: Creating the pod +Apr 29 18:43:01.623: INFO: The status of Pod pod-projected-configmaps-5a32878a-bf6c-4323-b2ab-edd4618a7b6c is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:43:03.630: INFO: The status of Pod pod-projected-configmaps-5a32878a-bf6c-4323-b2ab-edd4618a7b6c is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:43:05.630: INFO: The status of Pod pod-projected-configmaps-5a32878a-bf6c-4323-b2ab-edd4618a7b6c is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-ca405c33-0e05-49aa-ad2d-f8d55d452853 +STEP: Updating configmap cm-test-opt-upd-54c273f0-1a10-459b-b0fc-4ef80c25439f +STEP: Creating configMap with name cm-test-opt-create-d38530b9-73df-49a2-aa27-e827eb8fced8 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:12.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9881" for this suite. 
+ +• [SLOW TEST:70.562 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1515,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:12.117: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Apr 29 18:44:12.169: INFO: Waiting up to 5m0s for pod "pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238" in namespace "emptydir-1285" to be "Succeeded or Failed" +Apr 29 18:44:12.175: INFO: Pod "pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340434ms +Apr 29 18:44:14.183: INFO: Pod "pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014343031s +STEP: Saw pod success +Apr 29 18:44:14.183: INFO: Pod "pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238" satisfied condition "Succeeded or Failed" +Apr 29 18:44:14.188: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238 container test-container: +STEP: delete the pod +Apr 29 18:44:14.210: INFO: Waiting for pod pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238 to disappear +Apr 29 18:44:14.214: INFO: Pod pod-d1ca47e4-f67c-40bd-b00c-43cdb1e5b238 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:14.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1285" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":86,"skipped":1545,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:14.237: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:44:16.312: INFO: Deleting pod "var-expansion-35ca0ec6-759e-4f3e-8c52-2c6adcac4a1c" in namespace "var-expansion-4322" +Apr 29 18:44:16.317: INFO: Wait up to 5m0s for pod "var-expansion-35ca0ec6-759e-4f3e-8c52-2c6adcac4a1c" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:18.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4322" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":87,"skipped":1558,"failed":0} +SS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:18.345: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-5e3fbe90-7c5a-45fa-88d0-b2a8796fe823 +STEP: Creating a pod to test consume configMaps +Apr 29 18:44:18.407: INFO: Waiting up to 5m0s for pod "pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5" in namespace "configmap-8687" to be "Succeeded or Failed" +Apr 29 18:44:18.413: INFO: Pod "pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.910554ms +Apr 29 18:44:20.421: INFO: Pod "pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.012967062s +STEP: Saw pod success +Apr 29 18:44:20.421: INFO: Pod "pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5" satisfied condition "Succeeded or Failed" +Apr 29 18:44:20.426: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5 container configmap-volume-test: +STEP: delete the pod +Apr 29 18:44:20.448: INFO: Waiting for pod pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5 to disappear +Apr 29 18:44:20.451: INFO: Pod pod-configmaps-54cfe952-c78a-4c2c-b40d-36b610da41a5 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:20.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8687" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":88,"skipped":1560,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:20.470: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Apr 29 18:44:20.525: INFO: Pod name sample-pod: Found 0 pods out of 3 +Apr 29 18:44:25.532: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Apr 29 18:44:25.537: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:25.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5170" for this suite. 
+ +• [SLOW TEST:5.112 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":89,"skipped":1597,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:25.582: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Apr 29 18:44:25.680: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8834 6a7fb0a4-0ebf-4f5b-800b-9113f34008b5 727282 0 2022-04-29 18:44:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-29 18:44:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 18:44:25.682: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8834 6a7fb0a4-0ebf-4f5b-800b-9113f34008b5 727283 0 2022-04-29 18:44:25 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-29 18:44:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:25.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8834" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":90,"skipped":1607,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:25.703: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:25.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7754" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":91,"skipped":1633,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:25.779: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-projected-4ftv +STEP: Creating a pod to test atomic-volume-subpath +Apr 29 18:44:25.952: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4ftv" in namespace "subpath-3399" to be "Succeeded or Failed" +Apr 29 18:44:25.964: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Pending", Reason="", readiness=false. Elapsed: 11.35976ms +Apr 29 18:44:27.971: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 2.018867551s +Apr 29 18:44:29.979: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 4.026340661s +Apr 29 18:44:31.986: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 6.033537461s +Apr 29 18:44:33.993: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 8.040300324s +Apr 29 18:44:36.001: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 10.04844814s +Apr 29 18:44:38.008: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 12.056105029s +Apr 29 18:44:40.015: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 14.063112015s +Apr 29 18:44:42.022: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 16.069518017s +Apr 29 18:44:44.028: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 18.075319614s +Apr 29 18:44:46.035: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Running", Reason="", readiness=true. Elapsed: 20.082953147s +Apr 29 18:44:48.043: INFO: Pod "pod-subpath-test-projected-4ftv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.090351156s +STEP: Saw pod success +Apr 29 18:44:48.043: INFO: Pod "pod-subpath-test-projected-4ftv" satisfied condition "Succeeded or Failed" +Apr 29 18:44:48.047: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-subpath-test-projected-4ftv container test-container-subpath-projected-4ftv: +STEP: delete the pod +Apr 29 18:44:48.067: INFO: Waiting for pod pod-subpath-test-projected-4ftv to disappear +Apr 29 18:44:48.071: INFO: Pod pod-subpath-test-projected-4ftv no longer exists +STEP: Deleting pod pod-subpath-test-projected-4ftv +Apr 29 18:44:48.071: INFO: Deleting pod "pod-subpath-test-projected-4ftv" in namespace "subpath-3399" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:48.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-3399" for this suite. + +• [SLOW TEST:22.314 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":92,"skipped":1663,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:48.098: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:48.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-848" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":93,"skipped":1687,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:48.160: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 18:44:48.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4" in namespace "projected-8700" to be "Succeeded or Failed" +Apr 29 18:44:48.214: INFO: Pod "downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.377139ms +Apr 29 18:44:50.226: INFO: Pod "downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017489817s +STEP: Saw pod success +Apr 29 18:44:50.227: INFO: Pod "downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4" satisfied condition "Succeeded or Failed" +Apr 29 18:44:50.232: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4 container client-container: +STEP: delete the pod +Apr 29 18:44:50.252: INFO: Waiting for pod downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4 to disappear +Apr 29 18:44:50.255: INFO: Pod downwardapi-volume-3ad4b45c-b065-4e04-94b4-7b176ff542d4 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:50.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8700" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":94,"skipped":1692,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:50.268: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:44:50.302: INFO: Creating simple deployment test-new-deployment +Apr 29 18:44:50.314: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 18:44:52.370: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-6552 424fa608-7f9d-4b4c-8eef-f01002b66940 727595 3 2022-04-29 18:44:50 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2022-04-29 18:44:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:44:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002645bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 18:44:51 +0000 UTC,LastTransitionTime:2022-04-29 18:44:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-04-29 18:44:51 +0000 UTC,LastTransitionTime:2022-04-29 18:44:50 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Apr 29 18:44:52.377: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-6552 e67b8e5c-9bc4-4ee7-bf03-4ed45bc457e3 727597 2 2022-04-29 18:44:50 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 424fa608-7f9d-4b4c-8eef-f01002b66940 0xc003844007 0xc003844008}] [] [{kube-controller-manager Update apps/v1 2022-04-29 18:44:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"424fa608-7f9d-4b4c-8eef-f01002b66940\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:44:51 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0038440a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:44:52.384: INFO: Pod "test-new-deployment-847dcfb7fb-4sv8t" is available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-4sv8t test-new-deployment-847dcfb7fb- deployment-6552 bdda7bc0-c722-49dd-9519-7eb187015188 727583 0 2022-04-29 18:44:50 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb e67b8e5c-9bc4-4ee7-bf03-4ed45bc457e3 0xc003844477 0xc003844478}] [] [{kube-controller-manager Update v1 2022-04-29 18:44:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e67b8e5c-9bc4-4ee7-bf03-4ed45bc457e3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:44:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-65k7w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65k7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodConditi
on{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.174,StartTime:2022-04-29 18:44:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:44:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://9cbac142c9c2ee7d2ee1c62aec59e61e15783c13201d6e3f8e91fc809cb9ffe0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 18:44:52.385: INFO: Pod "test-new-deployment-847dcfb7fb-d7d6d" is not available: +&Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-d7d6d test-new-deployment-847dcfb7fb- deployment-6552 cad04f8a-ebba-4a82-b137-e948aa046c8e 727600 0 2022-04-29 18:44:52 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb e67b8e5c-9bc4-4ee7-bf03-4ed45bc457e3 0xc003844657 0xc003844658}] [] [{kube-controller-manager Update v1 2022-04-29 18:44:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e67b8e5c-9bc4-4ee7-bf03-4ed45bc457e3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:44:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qww9n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qww9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition
{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:44:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:,StartTime:2022-04-29 18:44:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:52.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6552" for this suite. +•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":95,"skipped":1703,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:52.405: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:44:53.045: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Apr 29 18:44:55.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854693, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854693, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854693, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854693, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:44:58.076: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:58.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-332" for this suite. +STEP: Destroying namespace "webhook-332-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:5.924 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":96,"skipped":1714,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:58.335: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:44:58.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-6594" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":97,"skipped":1761,"failed":0} +SS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:44:58.440: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod with failed condition +STEP: updating the pod +Apr 29 18:46:59.028: INFO: Successfully updated pod "var-expansion-e6d973d6-ca62-462e-9b43-0435229fb649" +STEP: waiting for pod running +STEP: deleting the pod gracefully +Apr 29 18:47:01.044: INFO: Deleting pod "var-expansion-e6d973d6-ca62-462e-9b43-0435229fb649" in namespace "var-expansion-4073" +Apr 29 18:47:01.054: INFO: Wait up to 5m0s for pod "var-expansion-e6d973d6-ca62-462e-9b43-0435229fb649" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:47:33.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4073" for this suite. 
+ +• [SLOW TEST:154.637 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":98,"skipped":1763,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:47:33.081: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-eb086b1f-a7d2-4a50-8497-10e71241c75b +STEP: Creating the pod +Apr 29 18:47:33.150: INFO: The status of Pod pod-projected-configmaps-09aba528-2b1b-4fa8-b70c-8c74ee73f50c is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:47:35.156: INFO: The status of Pod pod-projected-configmaps-09aba528-2b1b-4fa8-b70c-8c74ee73f50c is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-eb086b1f-a7d2-4a50-8497-10e71241c75b +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:47:37.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6749" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":99,"skipped":1792,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:47:37.216: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:47:37.265: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Apr 29 18:47:37.281: INFO: Pod name sample-pod: Found 0 pods out of 1 +Apr 29 18:47:42.286: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Apr 29 18:47:42.286: INFO: Creating deployment "test-rolling-update-deployment" +Apr 29 18:47:42.291: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Apr 29 18:47:42.301: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Apr 29 18:47:44.312: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Apr 29 18:47:44.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854862, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854862, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854862, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786854862, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 18:47:46.322: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 18:47:46.338: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7728 f5612218-5f4d-4e60-9da5-4bee3605bbd8 729091 1 2022-04-29 18:47:42 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-04-29 18:47:42 +0000 
UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:47:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003738658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 18:47:42 +0000 UTC,LastTransitionTime:2022-04-29 18:47:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-04-29 18:47:44 +0000 UTC,LastTransitionTime:2022-04-29 18:47:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Apr 29 18:47:46.343: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-7728 d18246e1-aaea-4643-9855-23365af01d11 729081 1 2022-04-29 18:47:42 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f5612218-5f4d-4e60-9da5-4bee3605bbd8 0xc003738b57 0xc003738b58}] [] [{kube-controller-manager Update apps/v1 2022-04-29 18:47:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5612218-5f4d-4e60-9da5-4bee3605bbd8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:47:44 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003738c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:47:46.343: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Apr 29 18:47:46.343: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7728 37860eb1-6324-49b1-a0a0-1663a7cb941b 729090 2 2022-04-29 18:47:37 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f5612218-5f4d-4e60-9da5-4bee3605bbd8 0xc003738a27 0xc003738a28}] [] [{e2e.test Update apps/v1 2022-04-29 18:47:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:47:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5612218-5f4d-4e60-9da5-4bee3605bbd8\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-29 18:47:44 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003738ae8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Apr 29 18:47:46.348: INFO: Pod "test-rolling-update-deployment-585b757574-nkdfj" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-nkdfj test-rolling-update-deployment-585b757574- deployment-7728 6165295d-3297-4786-84d4-234dcb828c75 729080 0 2022-04-29 18:47:42 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 d18246e1-aaea-4643-9855-23365af01d11 0xc003739067 0xc003739068}] [] [{kube-controller-manager Update v1 2022-04-29 18:47:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d18246e1-aaea-4643-9855-23365af01d11\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 18:47:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.179\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6lkh7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6lkh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kube
rnetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:47:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:47:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 18:47:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.179,StartTime:2022-04-29 18:47:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 18:47:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://0e03e4bd15d85f0ef64eeadd13deb568652383534d87f8238707c42718a5b628,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:47:46.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7728" for this suite. 
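[Editor's note: for readers reproducing this rolling update by hand, the Deployment dumped above boils down to a small manifest. A minimal sketch, using only values visible in the log (the `sample-pod` label, the agnhost image, the 25% surge/unavailable strategy); the heredoc style is illustrative:]

```console
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
EOF
```

[Updating `.spec.template` (for example the image) then triggers the same delete-old-pods/create-new-pods rollout this test asserts.]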
+ +• [SLOW TEST:9.146 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":100,"skipped":1817,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:47:46.362: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-c966d2e6-b588-4e63-bedd-a7cb065fc6d7 +STEP: Creating a pod to test consume secrets +Apr 29 18:47:46.408: INFO: Waiting up to 5m0s for pod "pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f" in namespace "secrets-8986" to be "Succeeded or Failed" +Apr 29 18:47:46.410: INFO: Pod "pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41337ms +Apr 29 18:47:48.416: INFO: Pod "pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007822669s +Apr 29 18:47:50.421: INFO: Pod "pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013742767s +STEP: Saw pod success +Apr 29 18:47:50.422: INFO: Pod "pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f" satisfied condition "Succeeded or Failed" +Apr 29 18:47:50.426: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f container secret-volume-test: +STEP: delete the pod +Apr 29 18:47:50.453: INFO: Waiting for pod pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f to disappear +Apr 29 18:47:50.464: INFO: Pod pod-secrets-bc006e6e-c59c-4bff-a53e-61d67235d01f no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:47:50.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8986" for this suite. 
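[Editor's note: "mappings and Item Mode" above refers to the `items`/`mode` fields of a secret volume. A hedged sketch of the pattern; the secret name, key names, and 0400 mode are illustrative, and the agnhost `mounttest` subcommand is what this suite's images use to print mounted files:]

```console
$ kubectl create secret generic secret-test --from-literal=data-1=value-1
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["mounttest", "--file_content=/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF
```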
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":101,"skipped":1825,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:47:50.496: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Apr 29 18:47:50.590: INFO: The status of Pod annotationupdate5d1c343e-407d-4a75-877a-df8e8f471a56 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:47:52.597: INFO: The status of Pod annotationupdate5d1c343e-407d-4a75-877a-df8e8f471a56 is Running (Ready = true) +Apr 29 18:47:53.121: INFO: Successfully updated pod "annotationupdate5d1c343e-407d-4a75-877a-df8e8f471a56" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:47:55.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3578" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":102,"skipped":1831,"failed":0} +S +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:47:55.151: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:47:55.249: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Apr 29 18:47:55.261: INFO: Number of nodes with available pods: 0 +Apr 29 18:47:55.261: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Change node label to blue, check that daemon pod is launched. 
+Apr 29 18:47:55.284: INFO: Number of nodes with available pods: 0 +Apr 29 18:47:55.284: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:47:56.289: INFO: Number of nodes with available pods: 0 +Apr 29 18:47:56.289: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:47:57.291: INFO: Number of nodes with available pods: 1 +Apr 29 18:47:57.291: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update the node label to green, and wait for daemons to be unscheduled +Apr 29 18:47:57.321: INFO: Number of nodes with available pods: 1 +Apr 29 18:47:57.321: INFO: Number of running nodes: 0, number of available pods: 1 +Apr 29 18:47:58.326: INFO: Number of nodes with available pods: 0 +Apr 29 18:47:58.326: INFO: Number of running nodes: 0, number of available pods: 0 +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Apr 29 18:47:58.337: INFO: Number of nodes with available pods: 0 +Apr 29 18:47:58.337: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:47:59.343: INFO: Number of nodes with available pods: 0 +Apr 29 18:47:59.343: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:48:00.347: INFO: Number of nodes with available pods: 0 +Apr 29 18:48:00.347: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:48:01.344: INFO: Number of nodes with available pods: 0 +Apr 29 18:48:01.344: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 18:48:02.344: INFO: Number of nodes with available pods: 1 +Apr 29 18:48:02.344: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6664, will wait for the garbage collector to delete the pods +Apr 29 18:48:02.414: INFO: Deleting DaemonSet.extensions daemon-set took: 6.458632ms +Apr 29 18:48:02.516: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.524039ms +Apr 29 18:48:05.422: INFO: Number of nodes with available pods: 0 +Apr 29 18:48:05.422: INFO: Number of running nodes: 0, number of available pods: 0 +Apr 29 18:48:05.426: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"729372"},"items":null} + +Apr 29 18:48:05.429: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"729372"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:48:05.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6664" for this suite. 
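[Editor's note: the blue/green dance above is driven entirely by node labels and a `nodeSelector`. A sketch under assumed names; the label key `color` is illustrative (the suite generates its own key), but the mechanics are the same:]

```console
$ kubectl label node <node-name> color=blue
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
$ kubectl label node <node-name> color=green --overwrite   # daemon pod is then evicted
```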
+ +• [SLOW TEST:10.319 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":103,"skipped":1832,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:48:05.470: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Apr 29 18:48:05.513: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:48:09.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3364" for this suite. 
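[Editor's note: "invoke init containers on a RestartNever pod" exercises a shape like the following; a minimal sketch, with names and the busybox image tag as illustrative assumptions:]

```console
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["true"]
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["true"]
  containers:
  - name: run1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["true"]
EOF
```

[Both init containers must exit 0, in order, before `run1` starts; with RestartPolicy Never the pod then ends in Succeeded.]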
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":104,"skipped":1840,"failed":0} +SSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:48:09.581: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +Apr 29 18:48:09.626: INFO: created test-event-1 +Apr 29 18:48:09.631: INFO: created test-event-2 +Apr 29 18:48:09.637: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Apr 29 18:48:09.642: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Apr 29 18:48:09.656: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:48:09.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-901" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":105,"skipped":1845,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:48:09.671: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating pod +Apr 29 18:48:09.721: INFO: The status of Pod pod-hostip-d804e005-3df8-4a4a-83ab-a90fba550275 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:48:11.728: INFO: The status of Pod pod-hostip-d804e005-3df8-4a4a-83ab-a90fba550275 is Running (Ready = true) +Apr 29 18:48:11.735: INFO: Pod pod-hostip-d804e005-3df8-4a4a-83ab-a90fba550275 has hostIP: 10.180.99.66 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:48:11.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-4414" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":106,"skipped":1859,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:48:11.750: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-266 +STEP: creating service affinity-clusterip in namespace services-266 +STEP: creating replication controller affinity-clusterip in namespace services-266 +I0429 18:48:11.816804 25 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-266, replica count: 3 +I0429 18:48:14.868565 25 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 18:48:14.881: INFO: Creating new exec pod +Apr 29 18:48:17.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-266 exec execpod-affinityzlsnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Apr 29 18:48:18.288: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Apr 29 18:48:18.288: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:48:18.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-266 exec execpod-affinityzlsnl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.130.228 80' +Apr 29 18:48:18.469: INFO: stderr: "+ nc -v+ -t -w 2 100.66.130.228echo 80\n hostName\nConnection to 100.66.130.228 80 port [tcp/http] succeeded!\n" +Apr 29 18:48:18.469: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:48:18.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-266 exec execpod-affinityzlsnl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.66.130.228:80/ ; done' +Apr 29 18:48:18.731: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.130.228:80/\n" +Apr 29 18:48:18.731: INFO: stdout: "\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg\naffinity-clusterip-x9rhg" +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Received response from host: affinity-clusterip-x9rhg +Apr 29 18:48:18.731: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-266, will wait for the garbage collector to delete the pods +Apr 29 18:48:18.802: INFO: Deleting ReplicationController affinity-clusterip took: 6.359786ms +Apr 29 18:48:18.903: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.772824ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:48:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-266" for this suite. 
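[Editor's note: all sixteen curls above landing on `affinity-clusterip-x9rhg` is the point of the test: the Service pins a client to one endpoint. A minimal sketch of such a Service; the selector label is an illustrative assumption (the test fronts a ReplicationController):]

```console
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: affinity-clusterip
  ports:
  - port: 80
EOF
```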
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.784 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":107,"skipped":1868,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:48:21.538: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Apr 29 18:48:21.579: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:49:03.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3647" for this suite. 
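[Editor's note: renaming a served version, as exercised here, is an edit to `spec.versions` on the CRD; the published OpenAPI document follows. A compressed, hedged sketch with illustrative group and kind names:]

```console
$ cat <<EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v2
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v4        # was v3; re-applying with the new name renames the served version
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
```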
+ +• [SLOW TEST:41.607 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":108,"skipped":1921,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:49:03.146: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-460 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating stateful set ss in namespace statefulset-460 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-460 +Apr 29 18:49:03.241: INFO: Found 0 stateful pods, waiting for 1 +Apr 29 18:49:13.247: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Apr 29 18:49:13.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 18:49:13.572: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 18:49:13.572: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 18:49:13.572: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 18:49:13.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Apr 29 18:49:23.585: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 18:49:23.585: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 18:49:23.604: INFO: POD NODE PHASE GRACE CONDITIONS +Apr 29 18:49:23.604: INFO: ss-0 tkg-mgmt-vc-md-0-59d8b7c778-msxpc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:14 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:03 +0000 UTC }] +Apr 29 18:49:23.604: INFO: +Apr 29 18:49:23.604: INFO: StatefulSet ss has not reached scale 3, at 1 +Apr 29 18:49:24.678: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994032785s +Apr 29 18:49:25.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.920928833s +Apr 29 18:49:26.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.916801873s +Apr 29 18:49:27.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.911856121s +Apr 29 18:49:28.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.905655562s +Apr 29 18:49:29.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.900427375s +Apr 29 18:49:30.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.895363534s +Apr 29 18:49:31.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.89010319s +Apr 29 18:49:32.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 885.341344ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-460 +Apr 29 18:49:33.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 18:49:33.893: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 18:49:33.893: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 18:49:33.893: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 18:49:33.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 18:49:34.085: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Apr 29 18:49:34.085: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 18:49:34.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 18:49:34.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 18:49:34.252: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Apr 29 18:49:34.252: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 18:49:34.252: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 18:49:34.256: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Apr 29 18:49:44.263: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 18:49:44.263: INFO: Waiting for pod ss-1 to enter Running - 
Ready=true, currently Running - Ready=true +Apr 29 18:49:44.263: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Apr 29 18:49:44.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 18:49:44.434: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 18:49:44.434: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 18:49:44.434: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 18:49:44.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 18:49:44.606: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 18:49:44.606: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 18:49:44.606: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 18:49:44.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-460 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 18:49:44.823: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 18:49:44.823: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 18:49:44.823: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 18:49:44.823: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 18:49:44.827: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Apr 29 18:49:54.837: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 18:49:54.837: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 18:49:54.837: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 18:49:54.851: INFO: POD NODE PHASE GRACE CONDITIONS +Apr 29 18:49:54.851: INFO: ss-0 tkg-mgmt-vc-md-0-59d8b7c778-msxpc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:03 +0000 UTC }] +Apr 29 18:49:54.851: INFO: ss-1 tkg-mgmt-vc-control-plane-4czbf Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC }] +Apr 29 
18:49:54.851: INFO: ss-2 tkg-mgmt-vc-md-0-59d8b7c778-msxpc Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC }] +Apr 29 18:49:54.851: INFO: +Apr 29 18:49:54.851: INFO: StatefulSet ss has not reached scale 0, at 3 +Apr 29 18:49:55.856: INFO: POD NODE PHASE GRACE CONDITIONS +Apr 29 18:49:55.856: INFO: ss-0 tkg-mgmt-vc-md-0-59d8b7c778-msxpc Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:03 +0000 UTC }] +Apr 29 18:49:55.856: INFO: ss-1 tkg-mgmt-vc-control-plane-4czbf Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC }] +Apr 29 18:49:55.856: INFO: ss-2 tkg-mgmt-vc-md-0-59d8b7c778-msxpc Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 18:49:23 +0000 UTC }] +Apr 29 18:49:55.856: INFO: +Apr 29 18:49:55.856: INFO: StatefulSet ss has not reached scale 0, at 3 +Apr 29 18:49:56.861: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.98971713s +Apr 29 18:49:57.867: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.985362877s +Apr 29 18:49:58.872: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.978447824s +Apr 29 18:49:59.876: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.974243274s +Apr 29 18:50:00.881: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.969718291s +Apr 29 18:50:01.890: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.961656669s +Apr 29 18:50:02.896: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.955543603s +Apr 29 18:50:03.900: INFO: Verifying statefulset ss doesn't scale past 0 for another 950.381399ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-460 +Apr 29 18:50:04.905: INFO: Scaling statefulset ss to 0 +Apr 29 18:50:04.917: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 18:50:04.921: INFO: Deleting all statefulset in ns statefulset-460 +Apr 29 18:50:04.924: INFO: Scaling statefulset ss to 0 +Apr 29 18:50:04.935: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 18:50:04.937: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:50:04.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-460" for this suite. + +• [SLOW TEST:61.825 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":109,"skipped":1939,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:50:04.972: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:51:05.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6708" for this suite. 
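[Editor's note: the sixty quiet seconds above are the test itself: a pod whose readiness probe always fails must sit Running but never Ready and never restart. A minimal sketch; the image tag and sleep duration are illustrative:]

```console
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
EOF
$ kubectl get pod readiness-always-fails    # READY stays 0/1, RESTARTS stays 0
```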
+ +• [SLOW TEST:60.059 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":1959,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:51:05.031: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-c32ee109-beed-4476-8307-a284e81fb636 in namespace container-probe-5578 +Apr 29 18:51:09.082: INFO: Started pod liveness-c32ee109-beed-4476-8307-a284e81fb636 in namespace container-probe-5578 +STEP: checking the pod's current state and verifying that restartCount is present +Apr 29 18:51:09.087: INFO: Initial restart count of pod liveness-c32ee109-beed-4476-8307-a284e81fb636 is 0 +Apr 29 18:51:27.158: INFO: Restart count of pod container-probe-5578/liveness-c32ee109-beed-4476-8307-a284e81fb636 is now 1 (18.07090926s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:51:27.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5578" for this suite. 
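[Editor's note: the restart counted above comes from an HTTP liveness probe against `/healthz`. A hedged sketch: agnhost's `liveness` server, used throughout this suite, begins failing `/healthz` after a short while, which forces the kubelet to restart the container; the port and delays here are illustrative assumptions:]

```console
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["liveness"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080          # assumed default port of the liveness server
      initialDelaySeconds: 5
      failureThreshold: 1
EOF
$ kubectl get pod liveness-http -w    # watch the RESTARTS column increment
```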
+ +• [SLOW TEST:22.155 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":111,"skipped":1981,"failed":0} +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:51:27.189: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Apr 29 18:51:27.267: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Apr 29 18:51:27.274: INFO: starting watch +STEP: patching +STEP: updating +Apr 29 18:51:27.297: INFO: waiting for watch events with expected annotations +Apr 29 18:51:27.298: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:51:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-2324" for this suite. 
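+
+The API operations exercised above (create, get, list, watch, patch, update, delete, deletecollection) can be reproduced by hand against any cluster; the name `hello` and the schedule are illustrative:
+
+```console
+$ kubectl create cronjob hello --image=busybox:1.28 --schedule="*/1 * * * *" -- /bin/sh -c 'date'
+$ kubectl get cronjob hello
+$ kubectl patch cronjob hello -p '{"spec":{"suspend":true}}'    # patch, as the test does
+$ kubectl delete cronjob hello
+```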
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":112,"skipped":1981,"failed":0} +SS +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:51:27.371: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:51:27.426: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b0756eed-6392-4305-a57e-ce0ca6ad69cd" in namespace "security-context-test-4254" to be "Succeeded or Failed" +Apr 29 18:51:27.437: INFO: Pod "busybox-privileged-false-b0756eed-6392-4305-a57e-ce0ca6ad69cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325401ms +Apr 29 18:51:29.445: INFO: Pod "busybox-privileged-false-b0756eed-6392-4305-a57e-ce0ca6ad69cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018139625s +Apr 29 18:51:29.445: INFO: Pod "busybox-privileged-false-b0756eed-6392-4305-a57e-ce0ca6ad69cd" satisfied condition "Succeeded or Failed" +Apr 29 18:51:29.463: INFO: Got logs for pod "busybox-privileged-false-b0756eed-6392-4305-a57e-ce0ca6ad69cd": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:51:29.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-4254" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":1983,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:51:29.484: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Apr 29 18:51:29.522: INFO: Waiting up to 1m0s for all nodes to be ready +Apr 29 18:52:29.584: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:52:29.590: INFO: Starting informer... +STEP: Starting pods... +Apr 29 18:52:29.812: INFO: Pod1 is running on tkg-mgmt-vc-md-0-59d8b7c778-msxpc. Tainting Node +Apr 29 18:52:32.050: INFO: Pod2 is running on tkg-mgmt-vc-md-0-59d8b7c778-msxpc. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Apr 29 18:52:38.459: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Apr 29 18:52:58.515: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:52:58.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-247" for this suite. 
+ +• [SLOW TEST:89.063 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":114,"skipped":2001,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:52:58.548: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 18:52:58.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e" in namespace "projected-2562" to be "Succeeded or Failed" +Apr 29 18:52:58.601: INFO: Pod "downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312158ms +Apr 29 18:53:00.608: INFO: Pod "downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013182817s +STEP: Saw pod success +Apr 29 18:53:00.608: INFO: Pod "downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e" satisfied condition "Succeeded or Failed" +Apr 29 18:53:00.612: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e container client-container: +STEP: delete the pod +Apr 29 18:53:00.643: INFO: Waiting for pod downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e to disappear +Apr 29 18:53:00.647: INFO: Pod downwardapi-volume-828f7203-bff1-4ec2-8f0b-3c089a29204e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:53:00.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2562" for this suite. 
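+
+The projected downwardAPI volume above surfaces the container's CPU request as a file. A minimal sketch, assuming an illustrative 250m request and a 1m divisor so the file reads `250`:
+
+```console
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-cpu-demo       # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client
+    image: busybox:1.28
+    command: ["cat", "/etc/podinfo/cpu_request"]
+    resources:
+      requests:
+        cpu: 250m
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: cpu_request
+            resourceFieldRef:
+              containerName: client
+              resource: requests.cpu
+              divisor: 1m       # report in millicores
+EOF
+$ kubectl logs downward-cpu-demo    # prints 250
+```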
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":2007,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:53:00.659: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Apr 29 18:53:20.834: INFO: EndpointSlice for Service endpointslice-4954/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:53:30.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-4954" for this suite. 
+ +• [SLOW TEST:30.200 seconds] +[sig-network] EndpointSlice +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":116,"skipped":2019,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:53:30.859: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-487 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-487 +STEP: creating replication controller externalsvc in namespace services-487 +I0429 18:53:30.926394 25 runners.go:190] Created replication controller with name: externalsvc, namespace: services-487, replica count: 2 +I0429 18:53:33.978126 25 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Apr 29 18:53:34.001: INFO: Creating new exec pod +Apr 29 18:53:36.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-487 exec execpodfjg59 -- /bin/sh -x -c nslookup clusterip-service.services-487.svc.cluster.local' +Apr 29 18:53:36.848: INFO: stderr: "+ nslookup clusterip-service.services-487.svc.cluster.local\n" +Apr 29 18:53:36.848: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nclusterip-service.services-487.svc.cluster.local\tcanonical name = externalsvc.services-487.svc.cluster.local.\nName:\texternalsvc.services-487.svc.cluster.local\nAddress: 100.66.233.2\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-487, will wait for the garbage collector to delete the pods +Apr 29 18:53:36.910: INFO: Deleting ReplicationController externalsvc took: 6.285731ms +Apr 29 18:53:37.011: INFO: Terminating ReplicationController externalsvc pods took: 100.892295ms +Apr 29 18:53:38.827: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 
18:53:38.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-487" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:7.988 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":117,"skipped":2050,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:53:38.848: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-995d3f12-39bf-4c1b-98dc-687d3271db1d +STEP: Creating a pod to test consume configMaps +Apr 29 18:53:38.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9" in namespace "configmap-5918" to be "Succeeded or Failed" +Apr 29 18:53:38.905: INFO: Pod "pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.939374ms +Apr 29 18:53:40.912: INFO: Pod "pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012988085s +STEP: Saw pod success +Apr 29 18:53:40.913: INFO: Pod "pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9" satisfied condition "Succeeded or Failed" +Apr 29 18:53:40.916: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9 container agnhost-container: +STEP: delete the pod +Apr 29 18:53:40.933: INFO: Waiting for pod pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9 to disappear +Apr 29 18:53:40.936: INFO: Pod pod-configmaps-31499173-1495-4d69-b947-b89ff28bd0a9 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:53:40.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5918" for this suite. 
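+
+The "mappings and Item mode" behavior above corresponds to the `items` and per-item `mode` fields of a configMap volume. A minimal sketch with illustrative names:
+
+```console
+$ kubectl create configmap demo-config --from-literal=data-1=value-1
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-mode-demo     # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: busybox
+    image: busybox:1.28
+    command: ["/bin/sh", "-c", "ls -l /etc/config/path/to && cat /etc/config/path/to/data-1"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/config
+  volumes:
+  - name: config
+    configMap:
+      name: demo-config
+      items:
+      - key: data-1
+        path: path/to/data-1    # remap the key to a nested path
+        mode: 0400              # per-item file mode
+EOF
+```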
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":118,"skipped":2067,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:53:40.952: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-6084 +Apr 29 18:53:41.015: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:53:43.023: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Apr 29 18:53:43.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6084 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Apr 29 18:53:43.222: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Apr 29 18:53:43.222: INFO: stdout: "iptables" +Apr 29 18:53:43.222: INFO: proxyMode: iptables +Apr 29 18:53:43.235: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Apr 29 18:53:43.238: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-6084 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-6084 +I0429 18:53:43.259507 25 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-6084, replica count: 3 +I0429 18:53:46.310543 25 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 18:53:46.317: INFO: Creating new exec pod +Apr 29 18:53:49.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6084 exec execpod-affinityd4lbk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Apr 29 18:53:49.535: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Apr 29 18:53:49.535: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:53:49.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6084 exec execpod-affinityd4lbk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.64.197.108 80' +Apr 29 18:53:49.699: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 100.64.197.108 80\nConnection to 100.64.197.108 80 port [tcp/http] succeeded!\n" +Apr 29 18:53:49.699: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 18:53:49.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6084 exec execpod-affinityd4lbk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.64.197.108:80/ ; done' +Apr 29 18:53:49.960: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n" +Apr 29 18:53:49.960: INFO: stdout: "\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5\naffinity-clusterip-timeout-x49f5" +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from 
host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Received response from host: affinity-clusterip-timeout-x49f5 +Apr 29 18:53:49.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6084 exec execpod-affinityd4lbk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.64.197.108:80/' +Apr 29 18:53:50.135: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n" +Apr 29 18:53:50.135: INFO: stdout: "affinity-clusterip-timeout-x49f5" +Apr 29 18:54:10.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6084 exec execpod-affinityd4lbk -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://100.64.197.108:80/' +Apr 29 18:54:10.339: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://100.64.197.108:80/\n" +Apr 29 18:54:10.339: INFO: stdout: "affinity-clusterip-timeout-cpqgk" +Apr 29 18:54:10.340: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-6084, will wait for the garbage collector to delete the pods +Apr 29 18:54:10.414: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.84915ms +Apr 29 18:54:10.515: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.915382ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:54:13.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6084" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:32.105 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":119,"skipped":2107,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:54:13.061: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override all +Apr 29 18:54:13.122: INFO: Waiting up to 5m0s for pod "client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a" in namespace "containers-6151" to be "Succeeded or Failed" +Apr 29 18:54:13.128: INFO: 
Pod "client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276067ms +Apr 29 18:54:15.137: INFO: Pod "client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015280244s +STEP: Saw pod success +Apr 29 18:54:15.137: INFO: Pod "client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a" satisfied condition "Succeeded or Failed" +Apr 29 18:54:15.142: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a container agnhost-container: +STEP: delete the pod +Apr 29 18:54:15.160: INFO: Waiting for pod client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a to disappear +Apr 29 18:54:15.164: INFO: Pod client-containers-4454fa5f-ea75-40a3-9723-84eabdd7080a no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:54:15.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6151" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":120,"skipped":2118,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:54:15.182: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename server-version +STEP: Waiting for a default service account to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Request ServerVersion +STEP: Confirm major version +Apr 29 18:54:15.229: INFO: Major version: 1 +STEP: Confirm minor version +Apr 29 18:54:15.229: INFO: cleanMinorVersion: 22 +Apr 29 18:54:15.229: INFO: Minor version: 22 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:54:15.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-3772" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":121,"skipped":2135,"failed":0} + +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:54:15.242: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 18:54:15.308: INFO: Pod name sample-pod: Found 0 pods out of 1 +Apr 29 18:54:20.313: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Apr 29 18:54:20.327: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Apr 29 18:54:20.341: INFO: observed ReplicaSet test-rs in namespace replicaset-8281 with ReadyReplicas 1, AvailableReplicas 1 +Apr 29 18:54:20.366: INFO: observed ReplicaSet test-rs in namespace replicaset-8281 with ReadyReplicas 1, AvailableReplicas 1 +Apr 29 18:54:20.393: INFO: observed ReplicaSet test-rs in namespace replicaset-8281 with ReadyReplicas 1, AvailableReplicas 1 +Apr 29 18:54:20.411: INFO: observed ReplicaSet test-rs in namespace replicaset-8281 with ReadyReplicas 1, AvailableReplicas 1 +Apr 29 18:54:21.838: INFO: observed ReplicaSet test-rs in namespace replicaset-8281 with ReadyReplicas 2, AvailableReplicas 2 +Apr 29 18:54:21.965: INFO: observed Replicaset test-rs in namespace replicaset-8281 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:54:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-8281" for this suite. 
+ +• [SLOW TEST:6.736 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":122,"skipped":2135,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:54:21.981: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod busybox-725d864a-a70a-4791-83c1-6623c6a96262 in namespace container-probe-4965 +Apr 29 18:54:24.041: INFO: Started pod busybox-725d864a-a70a-4791-83c1-6623c6a96262 in namespace container-probe-4965 +STEP: checking the pod's current state and verifying that restartCount is present +Apr 29 18:54:24.046: INFO: Initial restart count of pod busybox-725d864a-a70a-4791-83c1-6623c6a96262 is 0 +Apr 29 18:55:14.200: INFO: Restart count of pod container-probe-4965/busybox-725d864a-a70a-4791-83c1-6623c6a96262 is now 1 (50.154347896s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:55:14.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4965" for this suite. 
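+
+The exec probe above is the classic `cat /tmp/health` pattern: the container creates the file, removes it later, and the kubelet restarts the container once the probe keeps failing. A sketch with illustrative timings:
+
+```console
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-exec-demo      # illustrative name
+spec:
+  containers:
+  - name: busybox
+    image: busybox:1.28
+    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
+    livenessProbe:
+      exec:
+        command: ["cat", "/tmp/health"]
+      initialDelaySeconds: 5
+      periodSeconds: 5
+EOF
+$ kubectl get pod liveness-exec-demo -w    # RESTARTS increments after /tmp/health disappears
+```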
+ +• [SLOW TEST:52.245 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":123,"skipped":2191,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:55:14.227: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 18:55:15.051: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 18:55:18.074: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:55:18.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1136" for this suite. +STEP: Destroying namespace "webhook-1136-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":124,"skipped":2198,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:55:18.225: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-6363 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Apr 29 18:55:18.289: INFO: Found 0 stateful pods, waiting for 3 +Apr 29 18:55:28.297: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 18:55:28.297: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 18:55:28.297: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 18:55:28.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-6363 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 18:55:28.519: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 18:55:28.519: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 18:55:28.519: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Apr 29 18:55:38.566: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Apr 29 18:55:48.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-6363 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 18:55:48.771: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 18:55:48.771: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 18:55:48.771: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision +Apr 29 18:55:58.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-6363 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 18:55:59.072: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 18:55:59.072: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 18:55:59.072: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 18:56:09.131: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Apr 29 18:56:19.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-6363 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 18:56:19.376: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 18:56:19.376: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 18:56:19.376: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 18:56:29.441: INFO: Deleting all statefulset in ns statefulset-6363 +Apr 29 18:56:29.448: INFO: Scaling statefulset ss2 to 0 +Apr 29 18:56:39.478: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 18:56:39.482: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:56:39.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-6363" for this suite. 
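+
+The rolling update and rollback above can also be driven with `kubectl rollout`; the container name `webserver` is an assumption about the pod template, not taken from the log:
+
+```console
+$ kubectl set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
+$ kubectl rollout status statefulset/ss2
+$ kubectl rollout undo statefulset/ss2     # back to the previous revision
+```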
+ +• [SLOW TEST:81.313 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":125,"skipped":2241,"failed":0} +SSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:56:39.539: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-5409 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Apr 29 18:56:39.589: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Apr 29 18:56:39.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:56:41.643: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 18:56:43.645: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:45.643: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:47.643: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:49.643: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:51.644: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:53.642: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:58.851: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:56:59.642: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 18:57:01.643: INFO: The status of Pod netserver-0 is Running (Ready = true) +Apr 29 18:57:01.651: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Apr 29 18:57:03.694: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Apr 29 18:57:03.694: INFO: Going to poll 100.96.0.122 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Apr 29 18:57:03.699: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.0.122 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5409 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 18:57:03.700: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 18:57:04.835: INFO: Found all 1 expected endpoints: [netserver-0] +Apr 29 18:57:04.835: INFO: Going to poll 100.96.1.214 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Apr 29 18:57:04.840: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 100.96.1.214 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5409 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 18:57:04.840: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 18:57:05.956: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 18:57:05.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-5409" for this suite. + +• [SLOW TEST:26.430 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":126,"skipped":2247,"failed":0} +S +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 18:57:05.970: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod test-webserver-f02d3866-5711-4a09-8894-2ccefdd264d9 in namespace container-probe-2996 +Apr 29 18:57:08.028: INFO: Started pod test-webserver-f02d3866-5711-4a09-8894-2ccefdd264d9 in namespace container-probe-2996 +STEP: checking the pod's current state and verifying that restartCount is present +Apr 29 18:57:08.033: INFO: Initial restart count of pod test-webserver-f02d3866-5711-4a09-8894-2ccefdd264d9 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:09.850: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2996" for this suite. + +• [SLOW TEST:243.896 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2248,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:09.869: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-babd6f93-871c-404c-90f6-4cf1e5f147af +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:13.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2300" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":128,"skipped":2261,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:14.015: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override arguments +Apr 29 19:01:14.075: INFO: Waiting up to 5m0s for pod "client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171" in namespace "containers-3135" to be "Succeeded or Failed" +Apr 29 19:01:14.080: INFO: Pod "client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.776769ms +Apr 29 19:01:16.084: INFO: Pod "client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009221836s +Apr 29 19:01:18.090: INFO: Pod "client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01489179s +STEP: Saw pod success +Apr 29 19:01:18.090: INFO: Pod "client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171" satisfied condition "Succeeded or Failed" +Apr 29 19:01:18.094: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171 container agnhost-container: +STEP: delete the pod +Apr 29 19:01:18.118: INFO: Waiting for pod client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171 to disappear +Apr 29 19:01:18.122: INFO: Pod client-containers-05fb6a95-dc9e-4885-b0e2-d5f5c19a0171 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:18.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-3135" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":129,"skipped":2285,"failed":0} +SS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:18.135: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:01:18.550: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:01:21.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:01:21.595: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4104-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:24.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-279" for this suite. 
+STEP: Destroying namespace "webhook-279-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.737 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":130,"skipped":2287,"failed":0} +SS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:24.872: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-8025 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Apr 29 19:01:24.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Apr 29 19:01:24.948: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:01:26.954: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:01:28.954: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:30.955: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:32.955: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:34.955: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:36.955: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:38.954: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:40.955: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:42.955: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:44.954: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:01:46.955: INFO: The status of Pod netserver-0 is Running (Ready = true) +Apr 29 19:01:46.965: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Apr 29 19:01:50.988: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Apr 29 19:01:50.988: INFO: Breadth first check of 100.96.0.123 on host 10.180.111.35... 
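+
+The "Breadth first check" below execs a curl inside the standalone test pod against agnhost's /dial endpoint, which in turn dials each netserver pod on its HTTP port and reports the hostnames it reached. While the namespace still exists, the same probe can be replayed by hand with the addresses printed in this run:
+
+```console
+# Manual replay of the probe the framework issues below (pod name and
+# IPs are specific to this run and vanish with the namespace).
+$ kubectl -n pod-network-test-8025 exec test-container-pod -- \
+    curl -g -q -s 'http://100.96.1.221:9080/dial?request=hostname&protocol=http&host=100.96.0.123&port=8083&tries=1'
+```
+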
+Apr 29 19:01:50.994: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.221:9080/dial?request=hostname&protocol=http&host=100.96.0.123&port=8083&tries=1'] Namespace:pod-network-test-8025 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:01:50.995: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:01:51.156: INFO: Waiting for responses: map[] +Apr 29 19:01:51.156: INFO: reached 100.96.0.123 after 0/1 tries +Apr 29 19:01:51.156: INFO: Breadth first check of 100.96.1.220 on host 10.180.99.66... +Apr 29 19:01:51.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.221:9080/dial?request=hostname&protocol=http&host=100.96.1.220&port=8083&tries=1'] Namespace:pod-network-test-8025 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:01:51.161: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:01:51.263: INFO: Waiting for responses: map[] +Apr 29 19:01:51.264: INFO: reached 100.96.1.220 after 0/1 tries +Apr 29 19:01:51.264: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8025" for this suite. + +• [SLOW TEST:26.406 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":131,"skipped":2289,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:51.278: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-2f8a3733-e23e-493f-9398-141445e7eed9 +STEP: Creating a pod to test consume configMaps +Apr 29 19:01:51.329: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98" in namespace "projected-2496" to be "Succeeded or 
Failed" +Apr 29 19:01:51.333: INFO: Pod "pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.55875ms +Apr 29 19:01:53.342: INFO: Pod "pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012495828s +STEP: Saw pod success +Apr 29 19:01:53.342: INFO: Pod "pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98" satisfied condition "Succeeded or Failed" +Apr 29 19:01:53.346: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98 container agnhost-container: +STEP: delete the pod +Apr 29 19:01:53.379: INFO: Waiting for pod pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98 to disappear +Apr 29 19:01:53.384: INFO: Pod pod-projected-configmaps-1860bf94-9c7b-4aa4-aa55-b48070fe6d98 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:53.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2496" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2319,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:53.400: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:01:53.445: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:56.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-1317" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":133,"skipped":2338,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:56.794: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:01:56.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-1802" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":134,"skipped":2363,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:01:56.994: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:01:57.036: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Creating first CR +Apr 29 19:01:59.622: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T19:01:59Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T19:01:59Z]] name:name1 resourceVersion:737171 uid:934731b2-d280-4b41-b3f0-834941977bdc] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Apr 29 19:02:11.064: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] 
kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T19:02:09Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T19:02:09Z]] name:name2 resourceVersion:737246 uid:42e53785-59bf-4692-b3ce-6afdd29e6fc9] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Apr 29 19:02:21.078: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T19:01:59Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T19:02:21Z]] name:name1 resourceVersion:737296 uid:934731b2-d280-4b41-b3f0-834941977bdc] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Apr 29 19:02:31.090: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T19:02:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T19:02:31Z]] name:name2 resourceVersion:737363 uid:42e53785-59bf-4692-b3ce-6afdd29e6fc9] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Apr 29 19:02:41.106: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T19:01:59Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T19:02:21Z]] name:name1 resourceVersion:737432 uid:934731b2-d280-4b41-b3f0-834941977bdc] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Apr 29 19:02:51.121: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-04-29T19:02:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-04-29T19:02:31Z]] name:name2 resourceVersion:737502 uid:42e53785-59bf-4692-b3ce-6afdd29e6fc9] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:03:01.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-2139" for this suite. 
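+
+The ADDED/MODIFIED/DELETED notifications above are ordinary watch events on the custom resource. Assuming the test CRD's plural were known (it is not printed in this log), the same stream could be followed from a second terminal:
+
+```console
+# --output-watch-events prints the event type (ADDED/MODIFIED/DELETED)
+# alongside each object, matching the transitions logged above.
+$ kubectl get <plural>.mygroup.example.com --watch --output-watch-events
+```
+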
+ +• [SLOW TEST:64.661 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":135,"skipped":2366,"failed":0} +SS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:03:01.655: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-820 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a new StatefulSet +Apr 29 19:03:01.730: INFO: Found 0 stateful pods, waiting for 3 +Apr 29 19:03:11.739: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 19:03:11.739: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 19:03:11.739: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 +Apr 29 19:03:11.779: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Apr 29 19:03:21.828: INFO: Updating stateful set ss2 +Apr 29 19:03:21.838: INFO: Waiting for Pod statefulset-820/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +STEP: Restoring Pods to the correct revision when they are deleted +Apr 29 19:03:31.902: INFO: Found 2 stateful pods, waiting for 3 +Apr 29 19:03:41.910: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 19:03:41.910: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 19:03:41.910: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, 
currently Running - Ready=true +STEP: Performing a phased rolling update +Apr 29 19:03:41.944: INFO: Updating stateful set ss2 +Apr 29 19:03:41.959: INFO: Waiting for Pod statefulset-820/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +Apr 29 19:03:51.993: INFO: Updating stateful set ss2 +Apr 29 19:03:52.001: INFO: Waiting for StatefulSet statefulset-820/ss2 to complete update +Apr 29 19:03:52.001: INFO: Waiting for Pod statefulset-820/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 19:04:02.013: INFO: Deleting all statefulset in ns statefulset-820 +Apr 29 19:04:02.019: INFO: Scaling statefulset ss2 to 0 +Apr 29 19:04:12.049: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 19:04:12.055: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:12.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-820" for this suite. + +• [SLOW TEST:70.437 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":136,"skipped":2368,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:12.096: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Apr 29 19:04:12.144: INFO: Waiting up to 5m0s for pod "pod-82cddf51-9718-418c-b1b1-23642e8cb5a2" in namespace "emptydir-8777" to be "Succeeded or Failed" +Apr 29 19:04:12.147: INFO: Pod "pod-82cddf51-9718-418c-b1b1-23642e8cb5a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.712245ms +Apr 29 19:04:14.154: INFO: Pod "pod-82cddf51-9718-418c-b1b1-23642e8cb5a2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009757345s +STEP: Saw pod success +Apr 29 19:04:14.154: INFO: Pod "pod-82cddf51-9718-418c-b1b1-23642e8cb5a2" satisfied condition "Succeeded or Failed" +Apr 29 19:04:14.157: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-82cddf51-9718-418c-b1b1-23642e8cb5a2 container test-container: +STEP: delete the pod +Apr 29 19:04:14.185: INFO: Waiting for pod pod-82cddf51-9718-418c-b1b1-23642e8cb5a2 to disappear +Apr 29 19:04:14.190: INFO: Pod pod-82cddf51-9718-418c-b1b1-23642e8cb5a2 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:14.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8777" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":137,"skipped":2422,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:14.202: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-map-affcb07d-e178-43e4-9e30-b6cd6fa82fcb +STEP: Creating a pod to test consume secrets +Apr 29 19:04:14.262: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431" in namespace "projected-8847" to be "Succeeded or Failed" +Apr 29 19:04:14.267: INFO: Pod "pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431": Phase="Pending", Reason="", readiness=false. Elapsed: 5.020507ms +Apr 29 19:04:16.273: INFO: Pod "pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010695292s +STEP: Saw pod success +Apr 29 19:04:16.273: INFO: Pod "pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431" satisfied condition "Succeeded or Failed" +Apr 29 19:04:16.277: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431 container projected-secret-volume-test: +STEP: delete the pod +Apr 29 19:04:16.294: INFO: Waiting for pod pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431 to disappear +Apr 29 19:04:16.299: INFO: Pod pod-projected-secrets-3925d151-f5bb-46f9-842b-0f606f974431 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:16.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8847" for this suite. 
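+
+"With mappings" means the projected volume remaps secret keys to caller-chosen file paths via `items`, rather than exposing every key under its own name. A minimal self-contained sketch (all names illustrative):
+
+```console
+$ kubectl create secret generic demo-secret --from-literal=data-1=value-1
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo
+spec:
+  restartPolicy: Never
+  volumes:
+  - name: secret-vol
+    projected:
+      sources:
+      - secret:
+          name: demo-secret
+          items:
+          - key: data-1
+            path: new-path-data-1   # key is surfaced under this file name
+  containers:
+  - name: reader
+    image: busybox:1.28             # any minimal image with `cat` works
+    command: ["cat", "/etc/projected/new-path-data-1"]
+    volumeMounts:
+    - name: secret-vol
+      mountPath: /etc/projected
+EOF
+```
+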
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":138,"skipped":2434,"failed":0} +S +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:16.310: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-f335da8e-148c-4bba-8990-65dfb57be09e +STEP: Creating a pod to test consume secrets +Apr 29 19:04:16.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44" in namespace "projected-6653" to be "Succeeded or Failed" +Apr 29 19:04:16.375: INFO: Pod "pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44": Phase="Pending", Reason="", readiness=false. Elapsed: 7.154541ms +Apr 29 19:04:18.380: INFO: Pod "pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012097793s +STEP: Saw pod success +Apr 29 19:04:18.380: INFO: Pod "pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44" satisfied condition "Succeeded or Failed" +Apr 29 19:04:18.383: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44 container projected-secret-volume-test: +STEP: delete the pod +Apr 29 19:04:18.401: INFO: Waiting for pod pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44 to disappear +Apr 29 19:04:18.404: INFO: Pod pod-projected-secrets-c6748388-0f76-4bfc-9b79-dfe449c64b44 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:18.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6653" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":139,"skipped":2435,"failed":0} +SSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:18.417: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:18.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-6610" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":140,"skipped":2439,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:18.540: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:04:18.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510" in namespace "projected-6163" to be "Succeeded or Failed" +Apr 29 19:04:18.594: INFO: Pod "downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510": Phase="Pending", Reason="", readiness=false. Elapsed: 3.249161ms +Apr 29 19:04:20.599: INFO: Pod "downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008838765s +Apr 29 19:04:22.605: INFO: Pod "downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014867847s +STEP: Saw pod success +Apr 29 19:04:22.605: INFO: Pod "downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510" satisfied condition "Succeeded or Failed" +Apr 29 19:04:22.611: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510 container client-container: +STEP: delete the pod +Apr 29 19:04:22.635: INFO: Waiting for pod downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510 to disappear +Apr 29 19:04:22.639: INFO: Pod downwardapi-volume-fd52a4b8-8471-406b-9e99-c88334128510 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:22.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6163" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":141,"skipped":2517,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:22.652: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:22.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6605" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":142,"skipped":2563,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:22.746: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Apr 29 19:04:27.306: INFO: Successfully updated pod "adopt-release--1-229pd" +STEP: Checking that the Job readopts the Pod +Apr 29 19:04:27.306: INFO: Waiting up to 15m0s for pod "adopt-release--1-229pd" in namespace "job-4988" to be "adopted" +Apr 29 19:04:27.311: INFO: Pod "adopt-release--1-229pd": Phase="Running", Reason="", readiness=true. Elapsed: 4.113074ms +Apr 29 19:04:29.319: INFO: Pod "adopt-release--1-229pd": Phase="Running", Reason="", readiness=true. Elapsed: 2.012298964s +Apr 29 19:04:29.319: INFO: Pod "adopt-release--1-229pd" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Apr 29 19:04:29.831: INFO: Successfully updated pod "adopt-release--1-229pd" +STEP: Checking that the Job releases the Pod +Apr 29 19:04:29.831: INFO: Waiting up to 15m0s for pod "adopt-release--1-229pd" in namespace "job-4988" to be "released" +Apr 29 19:04:29.835: INFO: Pod "adopt-release--1-229pd": Phase="Running", Reason="", readiness=true. Elapsed: 3.92657ms +Apr 29 19:04:31.839: INFO: Pod "adopt-release--1-229pd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008206392s +Apr 29 19:04:31.839: INFO: Pod "adopt-release--1-229pd" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:31.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-4988" for this suite. + +• [SLOW TEST:9.104 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":143,"skipped":2574,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:31.851: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Apr 29 19:04:31.883: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Apr 29 19:04:31.899: INFO: Waiting for terminating namespaces to be deleted... 
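+
+The per-node inventory that follows ("Logging pods the apiserver thinks is on node ...") is the scheduler predicate test recording cluster state before it runs. The same listing is available directly through a field selector on spec.nodeName, using the node names from this run:
+
+```console
+$ kubectl get pods --all-namespaces --field-selector spec.nodeName=tkg-mgmt-vc-control-plane-4czbf
+$ kubectl get pods --all-namespaces --field-selector spec.nodeName=tkg-mgmt-vc-md-0-59d8b7c778-msxpc
+```
+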
+Apr 29 19:04:31.903: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-control-plane-4czbf before test +Apr 29 19:04:31.919: INFO: ako-0 from avi-system started at 2022-04-29 17:51:41 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container ako-tkg-system-tkg-mgmt-vc ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: capi-kubeadm-bootstrap-controller-manager-7ffb6dc8fc-8l5kl from capi-kubeadm-bootstrap-system started at 2022-04-29 01:35:11 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 12 +Apr 29 19:04:31.919: INFO: capi-kubeadm-control-plane-controller-manager-667999fdb8-twv4s from capi-kubeadm-control-plane-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 2 +Apr 29 19:04:31.919: INFO: capi-controller-manager-65c5769c4c-555gx from capi-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 15 +Apr 29 19:04:31.919: INFO: capv-controller-manager-75bdbfb7dc-888vj from capv-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 15 +Apr 29 19:04:31.919: INFO: cert-manager-cainjector-cc485fcdc-4qq4t from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container cert-manager ready: true, restart count 7 +Apr 29 19:04:31.919: INFO: cert-manager-d6b468546-pctjx from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: cert-manager-webhook-dd697458d-c6xrg from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: antrea-agent-k79rx from kube-system started at 2022-04-28 17:17:44 +0000 UTC (2 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: antrea-controller-f84fc8fd6-clc5q from kube-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container antrea-controller ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: coredns-67c8559bb6-7k2mz from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container coredns ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: coredns-67c8559bb6-bgthp from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container coredns ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: etcd-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container etcd ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: kube-apiserver-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container kube-apiserver ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: 
kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container kube-controller-manager ready: true, restart count 18 +Apr 29 19:04:31.919: INFO: kube-proxy-2fvxm from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: kube-scheduler-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-29 16:30:09 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container kube-scheduler ready: true, restart count 18 +Apr 29 19:04:31.919: INFO: metrics-server-58bbfb986f-7q897 from kube-system started at 2022-04-29 14:28:58 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container metrics-server ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: vsphere-cloud-controller-manager-9gc8w from kube-system started at 2022-04-28 17:16:39 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 19 +Apr 29 19:04:31.919: INFO: vsphere-csi-controller-7d96796c4d-p276x from kube-system started at 2022-04-28 17:16:08 +0000 UTC (5 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container csi-attacher ready: true, restart count 21 +Apr 29 19:04:31.919: INFO: Container csi-provisioner ready: true, restart count 22 +Apr 29 19:04:31.919: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: Container vsphere-csi-controller ready: true, restart count 2 +Apr 29 19:04:31.919: INFO: Container vsphere-syncer ready: true, restart count 18 +Apr 29 19:04:31.919: INFO: vsphere-csi-node-ld676 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: Container node-driver-registrar ready: true, restart count 2 +Apr 29 19:04:31.919: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2kxj9 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: Container systemd-logs ready: false, restart count 0 +Apr 29 19:04:31.919: INFO: secretgen-controller-6dd9c95967-hfpnj from tanzu-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container secretgen-controller ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: ako-operator-controller-manager-79cb9ccfc8-lwlw6 from tkg-system-networking started at 2022-04-29 17:51:37 +0000 UTC (2 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: kapp-controller-5b7d886dcc-rg8d8 from tkg-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container kapp-controller ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: tanzu-addons-controller-manager-667d5c846f-f78n7 from tkg-system started at 2022-04-28 17:13:30 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container tanzu-addons-controller ready: true, 
restart count 1 +Apr 29 19:04:31.919: INFO: tanzu-capabilities-controller-manager-7864dcb4b7-9jhgh from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: tanzu-featuregates-controller-manager-fb8cf8ffc-qptgc from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 0 +Apr 29 19:04:31.919: INFO: tkr-controller-manager-7c99874659-rqlgx from tkr-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.919: INFO: Container manager ready: true, restart count 1 +Apr 29 19:04:31.919: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-md-0-59d8b7c778-msxpc before test +Apr 29 19:04:31.930: INFO: adopt-release--1-229pd from job-4988 started at 2022-04-29 19:04:22 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container c ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: adopt-release--1-skx4t from job-4988 started at 2022-04-29 19:04:29 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container c ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: adopt-release--1-xp587 from job-4988 started at 2022-04-29 19:04:22 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container c ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: antrea-agent-jmd5f from kube-system started at 2022-04-28 17:17:22 +0000 UTC (2 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 19:04:31.930: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 19:04:31.930: INFO: kube-proxy-gqrhv from kube-system started at 2022-04-28 17:12:43 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 19:04:31.930: INFO: vsphere-csi-node-fxcc9 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:04:31.930: INFO: Container node-driver-registrar ready: true, restart count 4 +Apr 29 19:04:31.930: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 19:04:31.930: INFO: sonobuoy from sonobuoy started at 2022-04-29 18:13:55 +0000 UTC (1 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container kube-sonobuoy ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: sonobuoy-e2e-job-d928f42f9304448b from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container e2e ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2lph2 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:04:31.930: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:04:31.930: INFO: Container systemd-logs ready: false, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to schedule Pod with nonempty NodeSelector. 
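+
+The pod being scheduled here carries a nodeSelector that no node in the cluster satisfies, so the scheduler is expected to emit the FailedScheduling event recorded below. A minimal reproduction (the selector key/value is illustrative; the suite's actual label is not printed):
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: restricted-pod          # name matches the event below
+spec:
+  nodeSelector:
+    nonexistent-label: "true"   # no node carries this label
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.5
+EOF
+$ kubectl get events --field-selector reason=FailedScheduling
+```
+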
+STEP: Considering event: +Type = [Warning], Name = [restricted-pod.16ea736090b485b2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match Pod's node affinity/selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:32.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-7235" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":144,"skipped":2586,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:32.988: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test service account token: +Apr 29 19:04:33.034: INFO: Waiting up to 5m0s for pod "test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c" in namespace "svcaccounts-2177" to be "Succeeded or Failed" +Apr 29 19:04:33.039: INFO: Pod "test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.348297ms +Apr 29 19:04:35.045: INFO: Pod "test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0109237s +STEP: Saw pod success +Apr 29 19:04:35.045: INFO: Pod "test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c" satisfied condition "Succeeded or Failed" +Apr 29 19:04:35.049: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c container agnhost-container: +STEP: delete the pod +Apr 29 19:04:35.069: INFO: Waiting for pod test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c to disappear +Apr 29 19:04:35.072: INFO: Pod test-pod-7e5c4dc3-3b8d-45c2-a00c-2a011a01152c no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:35.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2177" for this suite. 
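+
+A "projected service account token" is a volume-sourced, audience-bound, expiring token, distinct from the legacy secret-based token. A minimal sketch (path, audience, and expiry are illustrative):
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-token-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: reader
+    image: busybox:1.28
+    command: ["sh", "-c", "wc -c /var/run/secrets/tokens/sa-token"]
+    volumeMounts:
+    - name: token-vol
+      mountPath: /var/run/secrets/tokens
+  volumes:
+  - name: token-vol
+    projected:
+      sources:
+      - serviceAccountToken:
+          path: sa-token
+          audience: demo-audience   # illustrative audience
+          expirationSeconds: 3600
+EOF
+```
+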
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":145,"skipped":2598,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:35.083: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating secret secrets-9586/secret-test-1100507b-11e9-4e9f-b3c8-3c639653d3e8 +STEP: Creating a pod to test consume secrets +Apr 29 19:04:35.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb" in namespace "secrets-9586" to be "Succeeded or Failed" +Apr 29 19:04:35.145: INFO: Pod "pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204065ms +Apr 29 19:04:37.149: INFO: Pod "pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010038631s +STEP: Saw pod success +Apr 29 19:04:37.149: INFO: Pod "pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb" satisfied condition "Succeeded or Failed" +Apr 29 19:04:37.153: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb container env-test: +STEP: delete the pod +Apr 29 19:04:37.171: INFO: Waiting for pod pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb to disappear +Apr 29 19:04:37.174: INFO: Pod pod-configmaps-702d0beb-4467-41d5-bb84-ee57c7399fbb no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:04:37.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9586" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":146,"skipped":2624,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:04:37.186: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:04:37.238: INFO: created pod +Apr 29 19:04:37.238: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-5882" to be "Succeeded or Failed" +Apr 29 19:04:37.244: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.595316ms +Apr 29 19:04:39.249: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010594839s +STEP: Saw pod success +Apr 29 19:04:39.249: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Apr 29 19:05:09.250: INFO: polling logs +Apr 29 19:05:09.260: INFO: Pod logs: +2022/04/29 19:04:38 OK: Got token +2022/04/29 19:04:38 validating with in-cluster discovery +2022/04/29 19:04:38 OK: got issuer https://kubernetes.default.svc.cluster.local +2022/04/29 19:04:38 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5882:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1651259677, NotBefore:1651259077, IssuedAt:1651259077, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5882", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"d40cfba8-e2c0-4b31-ae1b-c0ada5529749"}}} +2022/04/29 19:04:38 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local +2022/04/29 19:04:38 OK: Validated signature on JWT +2022/04/29 19:04:38 OK: Got valid claims from token! +2022/04/29 19:04:38 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-5882:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1651259677, NotBefore:1651259077, IssuedAt:1651259077, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-5882", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"d40cfba8-e2c0-4b31-ae1b-c0ada5529749"}}} + +Apr 29 19:05:09.260: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:09.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-5882" for this suite. 
+ +• [SLOW TEST:32.094 seconds] +[sig-auth] ServiceAccounts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":147,"skipped":2647,"failed":0} +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:09.280: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:05:09.312: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Apr 29 19:05:09.323: INFO: The status of Pod pod-logs-websocket-5922f34b-586b-43c7-b202-175bb2b93a51 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:05:11.329: INFO: The status of Pod pod-logs-websocket-5922f34b-586b-43c7-b202-175bb2b93a51 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:11.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1566" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":148,"skipped":2647,"failed":0} +SSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:11.358: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:05:11.403: INFO: The status of Pod busybox-host-aliases3023abe2-55fb-48d4-839a-887878f95b0a is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:05:13.408: INFO: The status of Pod busybox-host-aliases3023abe2-55fb-48d4-839a-887878f95b0a is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:13.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-8457" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":149,"skipped":2650,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:13.432: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Apr 29 19:05:13.481: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.481: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.486: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.486: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.501: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.501: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.527: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:13.527: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Apr 29 19:05:14.904: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Apr 29 19:05:14.904: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Apr 29 19:05:15.324: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Apr 29 19:05:15.335: INFO: observed event type ADDED +STEP: waiting for Replicas to scale +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment 
test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 0 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.340: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.346: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.346: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.358: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.358: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:15.371: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:15.371: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:15.377: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:15.377: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:16.930: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:16.930: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:16.967: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +STEP: listing Deployments +Apr 29 19:05:16.973: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Apr 29 19:05:16.991: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Apr 29 19:05:17.003: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:17.003: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:17.031: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:17.047: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:18.358: INFO: observed 
Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:18.931: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:18.955: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:18.967: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Apr 29 19:05:20.357: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Apr 29 19:05:20.405: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:20.405: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 1 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 3 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 2 +Apr 29 19:05:20.406: INFO: observed Deployment test-deployment in namespace deployment-1201 with ReadyReplicas 3 +STEP: deleting the Deployment +Apr 29 19:05:20.417: INFO: observed event type MODIFIED +Apr 29 19:05:20.417: INFO: observed event type MODIFIED +Apr 29 19:05:20.417: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +Apr 29 19:05:20.418: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 19:05:20.428: INFO: Log out all the ReplicaSets if there is no deployment created +Apr 29 19:05:20.436: INFO: ReplicaSet "test-deployment-56c98d85f9": +&ReplicaSet{ObjectMeta:{test-deployment-56c98d85f9 deployment-1201 c2671e4b-87d2-4475-8e95-44ba3f491db1 739375 4 2022-04-29 19:05:15 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 7ce42266-21f3-49bd-9b67-3d8a9116c375 0xc00413fbe7 0xc00413fbe8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:05:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ce42266-21f3-49bd-9b67-3d8a9116c375\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:05:20 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 56c98d85f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.5 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00413fc80 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Apr 29 19:05:20.443: INFO: pod: "test-deployment-56c98d85f9-cx6dz": +&Pod{ObjectMeta:{test-deployment-56c98d85f9-cx6dz test-deployment-56c98d85f9- deployment-1201 86869ee7-1aed-4056-8ce8-c808062539c2 739371 0 2022-04-29 19:05:16 +0000 UTC 2022-04-29 19:05:21 +0000 UTC 0xc0037380c8 map[pod-template-hash:56c98d85f9 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-56c98d85f9 c2671e4b-87d2-4475-8e95-44ba3f491db1 0xc0037380f7 0xc0037380f8}] [] [{kube-controller-manager Update v1 2022-04-29 19:05:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c2671e4b-87d2-4475-8e95-44ba3f491db1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 19:05:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.127\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kk8dv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kk8dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClas
sName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:100.96.0.127,StartTime:2022-04-29 19:05:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 19:05:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.5,ImageID:k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07,ContainerID:containerd://6b7c20cf3aa38e6af86a2fbd902595857d997dd493c2f0fb5f5e924fa432b669,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.127,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Apr 29 19:05:20.444: INFO: ReplicaSet "test-deployment-855f7994f9": +&ReplicaSet{ObjectMeta:{test-deployment-855f7994f9 deployment-1201 710a154e-c117-4314-bad0-892154bd99df 739267 3 2022-04-29 19:05:13 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 7ce42266-21f3-49bd-9b67-3d8a9116c375 0xc00413fce7 0xc00413fce8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:05:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ce42266-21f3-49bd-9b67-3d8a9116c375\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:05:16 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 855f7994f9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:855f7994f9 test-deployment-static:true] map[] [] [] []} 
{[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00413fd70 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Apr 29 19:05:20.447: INFO: ReplicaSet "test-deployment-d4dfddfbf": +&ReplicaSet{ObjectMeta:{test-deployment-d4dfddfbf deployment-1201 5305ed20-53e9-4f9e-a06c-2a53ac9b735c 739367 2 2022-04-29 19:05:17 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 7ce42266-21f3-49bd-9b67-3d8a9116c375 0xc00413fdd7 0xc00413fdd8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:05:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ce42266-21f3-49bd-9b67-3d8a9116c375\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:05:18 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: d4dfddfbf,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00413fe60 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Apr 29 19:05:20.451: INFO: pod: "test-deployment-d4dfddfbf-bdbs9": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-bdbs9 test-deployment-d4dfddfbf- deployment-1201 d8ee69ac-2563-4857-a0cf-1329f985f5c0 739366 0 2022-04-29 19:05:18 +0000 UTC map[pod-template-hash:d4dfddfbf 
test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 5305ed20-53e9-4f9e-a06c-2a53ac9b735c 0xc003738ec7 0xc003738ec8}] [] [{kube-controller-manager Update v1 2022-04-29 19:05:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5305ed20-53e9-4f9e-a06c-2a53ac9b735c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 19:05:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.128\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hnjck,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnjck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt
-vc-control-plane-4czbf,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.111.35,PodIP:100.96.0.128,StartTime:2022-04-29 19:05:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 19:05:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://7df4f59048f607863f118dd64336c529c539a005d66ab1a40ac7eacae683e387,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.0.128,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Apr 29 19:05:20.452: INFO: pod: "test-deployment-d4dfddfbf-mgk8j": +&Pod{ObjectMeta:{test-deployment-d4dfddfbf-mgk8j test-deployment-d4dfddfbf- deployment-1201 602b1e72-4339-41b1-80b6-0b09db14ad47 739327 0 2022-04-29 19:05:17 +0000 UTC map[pod-template-hash:d4dfddfbf test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-d4dfddfbf 5305ed20-53e9-4f9e-a06c-2a53ac9b735c 0xc0037390a7 0xc0037390a8}] [] [{kube-controller-manager Update v1 2022-04-29 19:05:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5305ed20-53e9-4f9e-a06c-2a53ac9b735c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 19:05:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.243\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-89ng8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89ng8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{}
,WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:05:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.243,StartTime:2022-04-29 19:05:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 19:05:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://d6cc837aa5d7bc5307b63ef75a08057e2cde2b855e3e4cfd27463cfac391859c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:20.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1201" for this suite. + +• [SLOW TEST:7.037 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":150,"skipped":2661,"failed":0} +SS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:20.469: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:31.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9551" for this suite. + +• [SLOW TEST:11.152 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":346,"completed":151,"skipped":2663,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:31.625: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:05:31.708: INFO: The status of Pod busybox-scheduling-3069ede9-b914-4848-ae47-47cb8fe7bb04 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:05:33.715: INFO: The status of Pod busybox-scheduling-3069ede9-b914-4848-ae47-47cb8fe7bb04 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:05:35.715: INFO: The status of Pod busybox-scheduling-3069ede9-b914-4848-ae47-47cb8fe7bb04 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:35.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3485" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":152,"skipped":2675,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:35.747: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:05:37.809: INFO: Deleting pod "var-expansion-cbe08fac-1da8-44de-981c-59013402a443" in namespace "var-expansion-5620" +Apr 29 19:05:37.816: INFO: Wait up to 5m0s for pod "var-expansion-cbe08fac-1da8-44de-981c-59013402a443" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:39.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5620" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":153,"skipped":2687,"failed":0} +SSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:39.841: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-6jgnt in namespace proxy-5499 +I0429 19:05:39.902247 25 runners.go:190] Created replication controller with name: proxy-service-6jgnt, namespace: proxy-5499, replica count: 1 +I0429 19:05:40.957917 25 runners.go:190] proxy-service-6jgnt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0429 19:05:41.959032 25 runners.go:190] proxy-service-6jgnt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0429 19:05:42.959362 25 runners.go:190] proxy-service-6jgnt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:05:42.965: INFO: setup took 3.088755885s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Apr 29 19:05:42.980: INFO: (0) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 14.492307ms) +Apr 29 19:05:42.980: INFO: (0) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 14.289073ms) +Apr 29 19:05:42.981: INFO: (0) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 15.567688ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 15.545873ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 16.066691ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 16.492308ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 16.525261ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 16.020761ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 16.203167ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 16.02727ms) +Apr 29 19:05:42.982: INFO: (0) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... 
(200; 15.715799ms) +Apr 29 19:05:42.993: INFO: (0) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 27.076303ms) +Apr 29 19:05:42.994: INFO: (0) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 27.769704ms) +Apr 29 19:05:42.994: INFO: (0) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 27.877343ms) +Apr 29 19:05:42.994: INFO: (0) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: ... (200; 9.43397ms) +Apr 29 19:05:43.004: INFO: (1) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 9.385283ms) +Apr 29 19:05:43.004: INFO: (1) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 9.674714ms) +Apr 29 19:05:43.004: INFO: (1) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 9.285436ms) +Apr 29 19:05:43.004: INFO: (1) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 9.653824ms) +Apr 29 19:05:43.004: INFO: (1) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 9.526188ms) +Apr 29 19:05:43.004: INFO: (1) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 9.487636ms) +Apr 29 19:05:43.005: INFO: (1) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 10.051554ms) +Apr 29 19:05:43.005: INFO: (1) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 10.416875ms) +Apr 29 19:05:43.005: INFO: (1) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 10.408081ms) +Apr 29 19:05:43.005: INFO: (1) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 10.566969ms) +Apr 29 19:05:43.013: INFO: (2) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 6.329181ms) +Apr 29 19:05:43.013: INFO: (2) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 7.501545ms) +Apr 29 19:05:43.013: INFO: (2) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 7.491985ms) +Apr 29 19:05:43.013: INFO: (2) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 6.540791ms) +Apr 29 19:05:43.014: INFO: (2) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 7.754578ms) +Apr 29 19:05:43.014: INFO: (2) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 8.033422ms) +Apr 29 19:05:43.014: INFO: (2) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 7.048542ms) +Apr 29 19:05:43.014: INFO: (2) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 8.597578ms) +Apr 29 19:05:43.014: INFO: (2) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test<... (200; 5.240797ms) +Apr 29 19:05:43.021: INFO: (3) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 4.937793ms) +Apr 29 19:05:43.021: INFO: (3) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 5.156263ms) +Apr 29 19:05:43.026: INFO: (3) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... 
(200; 8.008111ms) +Apr 29 19:05:43.026: INFO: (3) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 9.020173ms) +Apr 29 19:05:43.026: INFO: (3) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 8.921324ms) +Apr 29 19:05:43.027: INFO: (3) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.337603ms) +Apr 29 19:05:43.026: INFO: (3) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 8.354305ms) +Apr 29 19:05:43.027: INFO: (3) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 9.800781ms) +Apr 29 19:05:43.027: INFO: (3) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 7.740523ms) +Apr 29 19:05:43.027: INFO: (3) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 8.781965ms) +Apr 29 19:05:43.027: INFO: (3) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 9.555033ms) +Apr 29 19:05:43.027: INFO: (3) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 6.168915ms) +Apr 29 19:05:43.035: INFO: (4) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 6.412181ms) +Apr 29 19:05:43.035: INFO: (4) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 6.705429ms) +Apr 29 19:05:43.035: INFO: (4) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 6.700544ms) +Apr 29 19:05:43.035: INFO: (4) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 6.411084ms) +Apr 29 19:05:43.035: INFO: (4) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 6.517618ms) +Apr 29 19:05:43.035: INFO: (4) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: ... (200; 7.929119ms) +Apr 29 19:05:43.045: INFO: (5) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 8.078928ms) +Apr 29 19:05:43.045: INFO: (5) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 7.93737ms) +Apr 29 19:05:43.045: INFO: (5) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.212403ms) +Apr 29 19:05:43.045: INFO: (5) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 7.965057ms) +Apr 29 19:05:43.047: INFO: (5) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 9.276932ms) +Apr 29 19:05:43.047: INFO: (5) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test<... 
(200; 9.73565ms) +Apr 29 19:05:43.049: INFO: (5) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 11.548586ms) +Apr 29 19:05:43.049: INFO: (5) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 11.346192ms) +Apr 29 19:05:43.049: INFO: (5) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 11.681779ms) +Apr 29 19:05:43.049: INFO: (5) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 11.50793ms) +Apr 29 19:05:43.049: INFO: (5) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 11.578414ms) +Apr 29 19:05:43.050: INFO: (5) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 12.891091ms) +Apr 29 19:05:43.060: INFO: (6) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 10.255942ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 9.662755ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 9.630917ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 9.940352ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 9.86855ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 10.282036ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 9.893738ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 9.897118ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 9.927782ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 10.12694ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 10.024303ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 9.981254ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 10.05565ms) +Apr 29 19:05:43.061: INFO: (6) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 10.274753ms) +Apr 29 19:05:43.067: INFO: (7) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: ... (200; 7.615449ms) +Apr 29 19:05:43.071: INFO: (7) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 9.016879ms) +Apr 29 19:05:43.071: INFO: (7) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 9.215715ms) +Apr 29 19:05:43.071: INFO: (7) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 9.030386ms) +Apr 29 19:05:43.071: INFO: (7) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... 
(200; 9.334666ms) +Apr 29 19:05:43.071: INFO: (7) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 9.225108ms) +Apr 29 19:05:43.072: INFO: (7) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 9.730298ms) +Apr 29 19:05:43.072: INFO: (7) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 9.933773ms) +Apr 29 19:05:43.072: INFO: (7) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 10.121323ms) +Apr 29 19:05:43.072: INFO: (7) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 10.00096ms) +Apr 29 19:05:43.072: INFO: (7) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 9.867781ms) +Apr 29 19:05:43.076: INFO: (8) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test<... (200; 4.336071ms) +Apr 29 19:05:43.081: INFO: (8) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 7.438741ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 7.194836ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 7.616847ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 7.176422ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.780941ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.52398ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 8.055554ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 9.862755ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 7.622549ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 9.369029ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 10.112018ms) +Apr 29 19:05:43.082: INFO: (8) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 9.629519ms) +Apr 29 19:05:43.083: INFO: (8) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 9.485798ms) +Apr 29 19:05:43.083: INFO: (8) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 9.755066ms) +Apr 29 19:05:43.087: INFO: (9) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 3.510138ms) +Apr 29 19:05:43.090: INFO: (9) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 6.83705ms) +Apr 29 19:05:43.102: INFO: (9) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 18.125126ms) +Apr 29 19:05:43.102: INFO: (9) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 18.375454ms) +Apr 29 19:05:43.102: INFO: (9) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test<... 
(200; 18.548583ms) +Apr 29 19:05:43.102: INFO: (9) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 18.626683ms) +Apr 29 19:05:43.102: INFO: (9) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 18.640207ms) +Apr 29 19:05:43.102: INFO: (9) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 18.73864ms) +Apr 29 19:05:43.103: INFO: (9) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 18.855481ms) +Apr 29 19:05:43.105: INFO: (9) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 21.424431ms) +Apr 29 19:05:43.109: INFO: (9) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 25.080399ms) +Apr 29 19:05:43.109: INFO: (9) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 25.127265ms) +Apr 29 19:05:43.109: INFO: (9) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 25.403347ms) +Apr 29 19:05:43.109: INFO: (9) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 25.721169ms) +Apr 29 19:05:43.116: INFO: (10) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 6.338881ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 7.159213ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 7.303807ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 7.118772ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 6.982724ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 7.05013ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 7.198722ms) +Apr 29 19:05:43.117: INFO: (10) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 6.946125ms) +Apr 29 19:05:43.118: INFO: (10) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.369617ms) +Apr 29 19:05:43.118: INFO: (10) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 8.719954ms) +Apr 29 19:05:43.118: INFO: (10) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 8.705592ms) +Apr 29 19:05:43.118: INFO: (10) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 8.550839ms) +Apr 29 19:05:43.120: INFO: (10) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 9.833264ms) +Apr 29 19:05:43.120: INFO: (10) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 9.832648ms) +Apr 29 19:05:43.120: INFO: (10) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 10.720893ms) +Apr 29 19:05:43.125: INFO: (11) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 4.293174ms) +Apr 29 19:05:43.125: INFO: (11) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: ... 
(200; 8.17968ms) +Apr 29 19:05:43.129: INFO: (11) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.210125ms) +Apr 29 19:05:43.129: INFO: (11) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 8.719359ms) +Apr 29 19:05:43.129: INFO: (11) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 8.341753ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.600199ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 8.732684ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 8.855664ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.792903ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 8.84322ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 8.958452ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 8.940901ms) +Apr 29 19:05:43.130: INFO: (11) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.989572ms) +Apr 29 19:05:43.131: INFO: (11) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 9.85708ms) +Apr 29 19:05:43.131: INFO: (11) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 10.313165ms) +Apr 29 19:05:43.137: INFO: (12) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 5.661498ms) +Apr 29 19:05:43.137: INFO: (12) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 5.654182ms) +Apr 29 19:05:43.137: INFO: (12) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 5.755452ms) +Apr 29 19:05:43.137: INFO: (12) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 6.08549ms) +Apr 29 19:05:43.137: INFO: (12) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 5.865186ms) +Apr 29 19:05:43.141: INFO: (12) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 9.164821ms) +Apr 29 19:05:43.141: INFO: (12) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 9.234074ms) +Apr 29 19:05:43.141: INFO: (12) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test<... (200; 9.161756ms) +Apr 29 19:05:43.151: INFO: (13) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 9.434434ms) +Apr 29 19:05:43.151: INFO: (13) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 9.619091ms) +Apr 29 19:05:43.151: INFO: (13) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 9.760821ms) +Apr 29 19:05:43.159: INFO: (14) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... 
(200; 6.339481ms) +Apr 29 19:05:43.159: INFO: (14) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 6.82309ms) +Apr 29 19:05:43.159: INFO: (14) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 7.013249ms) +Apr 29 19:05:43.159: INFO: (14) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 6.680923ms) +Apr 29 19:05:43.159: INFO: (14) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 7.185364ms) +Apr 29 19:05:43.159: INFO: (14) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 6.365069ms) +Apr 29 19:05:43.160: INFO: (14) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 7.685127ms) +Apr 29 19:05:43.160: INFO: (14) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 8.264175ms) +Apr 29 19:05:43.160: INFO: (14) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 7.348278ms) +Apr 29 19:05:43.160: INFO: (14) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 8.169706ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 12.842723ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 12.820689ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 12.993154ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: ... (200; 12.974214ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 13.166254ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 13.453487ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 13.649202ms) +Apr 29 19:05:43.175: INFO: (15) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 13.885957ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 13.648137ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 13.549999ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 13.638324ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 13.796738ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 13.956823ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 13.95811ms) +Apr 29 19:05:43.176: INFO: (15) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 13.710064ms) +Apr 29 19:05:43.182: INFO: (16) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... 
(200; 5.814842ms) +Apr 29 19:05:43.184: INFO: (16) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname1/proxy/: foo (200; 7.942682ms) +Apr 29 19:05:43.184: INFO: (16) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 7.99904ms) +Apr 29 19:05:43.185: INFO: (16) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.998765ms) +Apr 29 19:05:43.185: INFO: (16) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 9.535818ms) +Apr 29 19:05:43.186: INFO: (16) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 9.570048ms) +Apr 29 19:05:43.186: INFO: (16) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 10.141001ms) +Apr 29 19:05:43.187: INFO: (16) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 10.867493ms) +Apr 29 19:05:43.187: INFO: (16) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 10.524603ms) +Apr 29 19:05:43.187: INFO: (16) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 10.430952ms) +Apr 29 19:05:43.187: INFO: (16) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 10.551669ms) +Apr 29 19:05:43.187: INFO: (16) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 11.014882ms) +Apr 29 19:05:43.188: INFO: (16) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname1/proxy/: tls baz (200; 11.174915ms) +Apr 29 19:05:43.188: INFO: (16) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 11.067721ms) +Apr 29 19:05:43.188: INFO: (16) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname1/proxy/: foo (200; 11.500566ms) +Apr 29 19:05:43.196: INFO: (17) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 7.64114ms) +Apr 29 19:05:43.196: INFO: (17) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 8.254477ms) +Apr 29 19:05:43.197: INFO: (17) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.73221ms) +Apr 29 19:05:43.197: INFO: (17) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.434294ms) +Apr 29 19:05:43.197: INFO: (17) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: ... (200; 13.416212ms) +Apr 29 19:05:43.201: INFO: (17) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 12.990211ms) +Apr 29 19:05:43.201: INFO: (17) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 13.089211ms) +Apr 29 19:05:43.201: INFO: (17) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 12.968587ms) +Apr 29 19:05:43.202: INFO: (17) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 13.057539ms) +Apr 29 19:05:43.209: INFO: (18) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 7.399186ms) +Apr 29 19:05:43.209: INFO: (18) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq/proxy/: test (200; 7.418993ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... 
(200; 7.712542ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 8.428577ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.352881ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:160/proxy/: foo (200; 8.113575ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:460/proxy/: tls baz (200; 8.166068ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.023702ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:162/proxy/: bar (200; 8.114697ms) +Apr 29 19:05:43.210: INFO: (18) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:443/proxy/: test (200; 12.237945ms) +Apr 29 19:05:43.226: INFO: (19) /api/v1/namespaces/proxy-5499/services/http:proxy-service-6jgnt:portname2/proxy/: bar (200; 11.890242ms) +Apr 29 19:05:43.226: INFO: (19) /api/v1/namespaces/proxy-5499/pods/proxy-service-6jgnt-dn6jq:1080/proxy/: test<... (200; 12.087123ms) +Apr 29 19:05:43.227: INFO: (19) /api/v1/namespaces/proxy-5499/services/https:proxy-service-6jgnt:tlsportname2/proxy/: tls qux (200; 12.586447ms) +Apr 29 19:05:43.227: INFO: (19) /api/v1/namespaces/proxy-5499/pods/https:proxy-service-6jgnt-dn6jq:462/proxy/: tls qux (200; 13.334849ms) +Apr 29 19:05:43.228: INFO: (19) /api/v1/namespaces/proxy-5499/pods/http:proxy-service-6jgnt-dn6jq:1080/proxy/: ... (200; 13.625237ms) +Apr 29 19:05:43.228: INFO: (19) /api/v1/namespaces/proxy-5499/services/proxy-service-6jgnt:portname2/proxy/: bar (200; 13.571467ms) +STEP: deleting ReplicationController proxy-service-6jgnt in namespace proxy-5499, will wait for the garbage collector to delete the pods +Apr 29 19:05:43.290: INFO: Deleting ReplicationController proxy-service-6jgnt took: 7.352642ms +Apr 29 19:05:43.392: INFO: Terminating ReplicationController proxy-service-6jgnt pods took: 101.47859ms +[AfterEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:46.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-5499" for this suite. + +• [SLOW TEST:6.266 seconds] +[sig-network] Proxy +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":154,"skipped":2690,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:46.108: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:05:57.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4823" for this suite. + +• [SLOW TEST:11.126 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":346,"completed":155,"skipped":2694,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:05:57.236: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:05:57.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 create -f -' +Apr 29 19:06:00.701: INFO: stderr: "" +Apr 29 19:06:00.701: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Apr 29 19:06:00.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 create -f -' +Apr 29 19:06:01.401: INFO: stderr: "" +Apr 29 19:06:01.401: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Apr 29 19:06:02.409: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:06:02.409: INFO: Found 0 / 1 +Apr 29 19:06:03.407: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:06:03.408: INFO: Found 1 / 1 +Apr 29 19:06:03.408: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Apr 29 19:06:03.413: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:06:03.413: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Apr 29 19:06:03.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 describe pod agnhost-primary-77gwc' +Apr 29 19:06:03.515: INFO: stderr: "" +Apr 29 19:06:03.515: INFO: stdout: "Name: agnhost-primary-77gwc\nNamespace: kubectl-7310\nPriority: 0\nNode: tkg-mgmt-vc-md-0-59d8b7c778-msxpc/10.180.99.66\nStart Time: Fri, 29 Apr 2022 19:06:00 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 100.96.1.247\nIPs:\n IP: 100.96.1.247\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://fa138d19dffdc7eda93dc8a80a3cf6c7b051d5b4edf25602fe7bddb6b9bef262\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 29 Apr 2022 19:06:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdrmx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-cdrmx:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-7310/agnhost-primary-77gwc to tkg-mgmt-vc-md-0-59d8b7c778-msxpc\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Apr 29 19:06:03.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 describe rc agnhost-primary' +Apr 29 19:06:03.613: INFO: stderr: "" +Apr 29 19:06:03.613: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7310\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-77gwc\n" +Apr 29 19:06:03.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 describe service agnhost-primary' +Apr 29 19:06:03.709: INFO: stderr: "" +Apr 29 19:06:03.709: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7310\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 100.65.225.49\nIPs: 100.65.225.49\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 100.96.1.247:6379\nSession Affinity: None\nEvents: \n" +Apr 29 19:06:03.716: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 describe node tkg-mgmt-vc-control-plane-4czbf' +Apr 29 19:06:03.840: INFO: stderr: "" +Apr 29 19:06:03.840: INFO: stdout: "Name: tkg-mgmt-vc-control-plane-4czbf\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=vsphere-vm.cpu-2.mem-8gb.os-photon\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=tkg-mgmt-vc-control-plane-4czbf\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\n node.kubernetes.io/instance-type=vsphere-vm.cpu-2.mem-8gb.os-photon\nAnnotations: cluster.x-k8s.io/cluster-name: tkg-mgmt-vc\n cluster.x-k8s.io/cluster-namespace: tkg-system\n cluster.x-k8s.io/machine: tkg-mgmt-vc-control-plane-4czbf\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: tkg-mgmt-vc-control-plane\n csi.volume.kubernetes.io/nodeid: {\"csi.vsphere.vmware.com\":\"tkg-mgmt-vc-control-plane-4czbf\"}\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Thu, 28 Apr 2022 17:10:39 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: tkg-mgmt-vc-control-plane-4czbf\n AcquireTime: \n RenewTime: Fri, 29 Apr 2022 19:05:57 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 29 Apr 2022 19:02:51 +0000 Thu, 28 Apr 2022 17:10:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 29 Apr 2022 19:02:51 +0000 Thu, 28 Apr 2022 17:10:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 29 Apr 2022 19:02:51 +0000 Thu, 28 Apr 2022 17:10:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 29 Apr 2022 19:02:51 +0000 Thu, 28 Apr 2022 17:11:48 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n Hostname: tkg-mgmt-vc-control-plane-4czbf\n InternalIP: 10.180.111.35\n ExternalIP: 10.180.111.35\nCapacity:\n cpu: 6\n ephemeral-storage: 41138380Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 24682268Ki\n pods: 110\nAllocatable:\n cpu: 6\n ephemeral-storage: 37913130946\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 24579868Ki\n pods: 110\nSystem Info:\n Machine ID: 472fd8ae723249409fbbc467566df64f\n System UUID: c8161442-990e-eb98-526b-47a5bc71caac\n Boot ID: 2e54b8b0-af44-4cf4-a069-43dcf85926b4\n Kernel Version: 4.19.232-2.ph3\n OS Image: VMware Photon OS/Linux\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.9\n Kubelet Version: v1.22.8+vmware.1\n Kube-Proxy Version: v1.22.8+vmware.1\nPodCIDR: 100.96.0.0/24\nPodCIDRs: 100.96.0.0/24\nProviderID: vsphere://421416c8-0e99-98eb-526b-47a5bc71caac\nNon-terminated Pods: (29 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n avi-system ako-0 100m (1%) 350m (5%) 200Mi (0%) 400Mi (1%) 74m\n capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-7ffb6dc8fc-8l5kl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17h\n capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-667999fdb8-twv4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n capi-system 
capi-controller-manager-65c5769c4c-555gx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n capv-system capv-controller-manager-75bdbfb7dc-888vj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18h\n cert-manager cert-manager-cainjector-cc485fcdc-4qq4t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h37m\n cert-manager cert-manager-d6b468546-pctjx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h37m\n cert-manager cert-manager-webhook-dd697458d-c6xrg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h37m\n kube-system antrea-agent-k79rx 400m (6%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system antrea-controller-f84fc8fd6-clc5q 200m (3%) 0 (0%) 0 (0%) 0 (0%) 18h\n kube-system coredns-67c8559bb6-7k2mz 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 25h\n kube-system coredns-67c8559bb6-bgthp 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 25h\n kube-system etcd-tkg-mgmt-vc-control-plane-4czbf 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 25h\n kube-system kube-apiserver-tkg-mgmt-vc-control-plane-4czbf 250m (4%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf 200m (3%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system kube-proxy-2fvxm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system kube-scheduler-tkg-mgmt-vc-control-plane-4czbf 100m (1%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system metrics-server-58bbfb986f-7q897 100m (1%) 0 (0%) 200Mi (0%) 0 (0%) 4h37m\n kube-system vsphere-cloud-controller-manager-9gc8w 200m (3%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system vsphere-csi-controller-7d96796c4d-p276x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h\n kube-system vsphere-csi-node-ld676 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h\n sonobuoy sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2kxj9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52m\n tanzu-system secretgen-controller-6dd9c95967-hfpnj 120m (2%) 0 (0%) 100Mi (0%) 0 (0%) 74m\n tkg-system-networking ako-operator-controller-manager-79cb9ccfc8-lwlw6 100m (1%) 100m (1%) 100Mi (0%) 300Mi (1%) 74m\n tkg-system kapp-controller-5b7d886dcc-rg8d8 120m (2%) 0 (0%) 100Mi (0%) 0 (0%) 25h\n tkg-system tanzu-addons-controller-manager-667d5c846f-f78n7 100m (1%) 100m (1%) 40Mi (0%) 500Mi (2%) 25h\n tkg-system tanzu-capabilities-controller-manager-7864dcb4b7-9jhgh 100m (1%) 100m (1%) 20Mi (0%) 30Mi (0%) 74m\n tkg-system tanzu-featuregates-controller-manager-fb8cf8ffc-qptgc 100m (1%) 100m (1%) 20Mi (0%) 30Mi (0%) 74m\n tkr-system tkr-controller-manager-7c99874659-rqlgx 100m (1%) 100m (1%) 20Mi (0%) 100Mi (0%) 74m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 2590m (43%) 850m (14%)\n memory 1040Mi (4%) 1700Mi (7%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" +Apr 29 19:06:03.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7310 describe namespace kubectl-7310' +Apr 29 19:06:03.930: INFO: stderr: "" +Apr 29 19:06:03.930: INFO: stdout: "Name: kubectl-7310\nLabels: e2e-framework=kubectl\n e2e-run=75e00e5f-c4ea-4979-9e44-b3957b24b942\n kubernetes.io/metadata.name=kubectl-7310\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:06:03.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7310" for this suite. 
+ +• [SLOW TEST:6.706 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl describe + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1094 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":156,"skipped":2717,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:06:03.944: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Apr 29 19:06:04.271: INFO: Pod name wrapped-volume-race-5e4d9164-f7d2-498a-985c-e098e8b15523: Found 0 pods out of 5 +Apr 29 19:06:09.286: INFO: Pod name wrapped-volume-race-5e4d9164-f7d2-498a-985c-e098e8b15523: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-5e4d9164-f7d2-498a-985c-e098e8b15523 in namespace emptydir-wrapper-816, will wait for the garbage collector to delete the pods +Apr 29 19:06:19.481: INFO: Deleting ReplicationController wrapped-volume-race-5e4d9164-f7d2-498a-985c-e098e8b15523 took: 14.011675ms +Apr 29 19:06:19.683: INFO: Terminating ReplicationController wrapped-volume-race-5e4d9164-f7d2-498a-985c-e098e8b15523 pods took: 201.10639ms +STEP: Creating RC which spawns configmap-volume pods +Apr 29 19:06:23.607: INFO: Pod name wrapped-volume-race-846178c6-96b3-451a-9b19-ac4604e48376: Found 0 pods out of 5 +Apr 29 19:06:28.617: INFO: Pod name wrapped-volume-race-846178c6-96b3-451a-9b19-ac4604e48376: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-846178c6-96b3-451a-9b19-ac4604e48376 in namespace emptydir-wrapper-816, will wait for the garbage collector to delete the pods +Apr 29 19:06:38.733: INFO: Deleting ReplicationController wrapped-volume-race-846178c6-96b3-451a-9b19-ac4604e48376 took: 6.886939ms +Apr 29 19:06:38.833: INFO: Terminating ReplicationController wrapped-volume-race-846178c6-96b3-451a-9b19-ac4604e48376 pods took: 100.593186ms +STEP: Creating RC which spawns configmap-volume pods +Apr 29 19:06:42.560: INFO: Pod name wrapped-volume-race-a89be660-2cb0-46c5-8617-acb706892633: Found 0 pods out of 5 +Apr 29 19:06:47.572: INFO: Pod name wrapped-volume-race-a89be660-2cb0-46c5-8617-acb706892633: Found 5 pods out of 5 +STEP: Ensuring each pod is 
running +STEP: deleting ReplicationController wrapped-volume-race-a89be660-2cb0-46c5-8617-acb706892633 in namespace emptydir-wrapper-816, will wait for the garbage collector to delete the pods +Apr 29 19:06:59.664: INFO: Deleting ReplicationController wrapped-volume-race-a89be660-2cb0-46c5-8617-acb706892633 took: 6.810288ms +Apr 29 19:06:59.765: INFO: Terminating ReplicationController wrapped-volume-race-a89be660-2cb0-46c5-8617-acb706892633 pods took: 100.877842ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:03.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-816" for this suite. + +• [SLOW TEST:59.838 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":157,"skipped":2718,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:03.784: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:07:04.096: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Apr 29 19:07:06.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856024, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856024, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856024, loc:(*time.Location)(0xa0a1d40)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856024, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:07:09.134: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:07:09.139: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Creating a v1 custom resource +STEP: Create a v2 custom resource +STEP: List CRs in v1 +STEP: List CRs in v2 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:12.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-1849" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:8.625 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":158,"skipped":2754,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:12.409: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Apr 29 19:07:22.520: INFO: The status of Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf is Running (Ready = true) +Apr 29 19:07:22.773: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For 
garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:22.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6417" for this suite. + +• [SLOW TEST:10.376 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":159,"skipped":2755,"failed":0} +SSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:22.785: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:07:22.823: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Apr 29 19:07:22.837: INFO: The status of Pod pod-exec-websocket-b6a8578c-ed7f-46d2-a590-78cb9631aebc is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:07:24.843: INFO: The status of Pod pod-exec-websocket-b6a8578c-ed7f-46d2-a590-78cb9631aebc is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:24.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7506" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":160,"skipped":2759,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:24.966: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:07:25.360: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:07:28.389: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:40.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5000" for this suite. +STEP: Destroying namespace "webhook-5000-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:15.639 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":161,"skipped":2780,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:40.605: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-upd-70342cbe-a91f-4699-b841-0607cd8e1e2e +STEP: Creating the pod +Apr 29 19:07:40.674: INFO: The status of Pod pod-configmaps-0cfb9b0f-6c16-4d12-af99-5ac1c1dead17 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:07:42.681: INFO: The status of Pod pod-configmaps-0cfb9b0f-6c16-4d12-af99-5ac1c1dead17 is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-70342cbe-a91f-4699-b841-0607cd8e1e2e +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:44.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6556" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":162,"skipped":2790,"failed":0} +S +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:44.741: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:07:44.789: INFO: Got root ca configmap in namespace "svcaccounts-6655" +Apr 29 19:07:44.794: INFO: Deleted root ca configmap in namespace "svcaccounts-6655" +STEP: waiting for a new root ca configmap created +Apr 29 19:07:45.302: INFO: Recreated root ca configmap in namespace "svcaccounts-6655" +Apr 29 19:07:45.308: INFO: Updated root ca configmap in namespace "svcaccounts-6655" +STEP: waiting for the root ca configmap reconciled +Apr 29 19:07:45.815: INFO: Reconciled root ca configmap in namespace "svcaccounts-6655" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:07:45.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6655" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":163,"skipped":2791,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:07:45.827: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-294 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-294 +Apr 29 19:07:45.892: INFO: Found 0 stateful pods, waiting for 1 +Apr 29 19:07:55.898: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Apr 29 19:07:55.918: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Apr 29 19:07:55.927: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Apr 29 19:07:55.931: INFO: Observed &StatefulSet event: ADDED +Apr 29 19:07:55.931: INFO: Found Statefulset ss in namespace statefulset-294 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Apr 29 19:07:55.931: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Apr 29 19:07:55.931: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Apr 29 19:07:55.937: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Apr 29 19:07:55.940: INFO: Observed &StatefulSet event: ADDED +Apr 29 19:07:55.940: INFO: Observed Statefulset ss in namespace statefulset-294 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Apr 29 19:07:55.940: INFO: Observed &StatefulSet event: MODIFIED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 19:07:55.940: INFO: Deleting all statefulset in ns 
statefulset-294 +Apr 29 19:07:55.943: INFO: Scaling statefulset ss to 0 +Apr 29 19:08:05.961: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 19:08:05.964: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:08:05.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-294" for this suite. + +• [SLOW TEST:20.171 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":164,"skipped":2802,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:08:05.998: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:08:06.334: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:08:09.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:08:09.374: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:08:12.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2526" for this suite. +STEP: Destroying namespace "webhook-2526-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.624 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":165,"skipped":2816,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:08:12.623: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Apr 29 19:08:12.677: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:08:14.685: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Apr 29 19:08:14.704: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:08:16.710: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Apr 29 19:08:16.725: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Apr 29 19:08:16.732: INFO: Pod pod-with-prestop-exec-hook still exists +Apr 29 19:08:18.732: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Apr 29 19:08:18.742: INFO: Pod pod-with-prestop-exec-hook still exists +Apr 29 19:08:20.733: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Apr 29 19:08:20.738: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:08:20.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4936" for this suite. + +• [SLOW TEST:8.150 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":166,"skipped":2858,"failed":0} +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:08:20.774: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename taint-single-pod +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Apr 29 19:08:20.817: INFO: Waiting up to 1m0s for all nodes to be ready +Apr 29 19:09:20.878: INFO: Waiting for terminating namespaces to be deleted... 
+[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:09:20.886: INFO: Starting informer... +STEP: Starting pod... +Apr 29 19:09:21.110: INFO: Pod is running on tkg-mgmt-vc-md-0-59d8b7c778-msxpc. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Apr 29 19:09:21.143: INFO: Pod wasn't evicted. Proceeding +Apr 29 19:09:21.143: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Apr 29 19:10:36.173: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:10:36.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-1107" for this suite. + +• [SLOW TEST:135.425 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":167,"skipped":2858,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:10:36.201: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's command +Apr 29 19:10:36.265: INFO: Waiting up to 5m0s for pod "var-expansion-c5d05183-7d24-4b52-909b-c956b4001720" in namespace "var-expansion-5007" to be "Succeeded or Failed" +Apr 29 19:10:36.272: INFO: Pod "var-expansion-c5d05183-7d24-4b52-909b-c956b4001720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583495ms +Apr 29 19:10:38.278: INFO: Pod "var-expansion-c5d05183-7d24-4b52-909b-c956b4001720": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013274178s +STEP: Saw pod success +Apr 29 19:10:38.278: INFO: Pod "var-expansion-c5d05183-7d24-4b52-909b-c956b4001720" satisfied condition "Succeeded or Failed" +Apr 29 19:10:38.282: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod var-expansion-c5d05183-7d24-4b52-909b-c956b4001720 container dapi-container: +STEP: delete the pod +Apr 29 19:10:38.311: INFO: Waiting for pod var-expansion-c5d05183-7d24-4b52-909b-c956b4001720 to disappear +Apr 29 19:10:38.316: INFO: Pod var-expansion-c5d05183-7d24-4b52-909b-c956b4001720 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:10:38.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5007" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":168,"skipped":2870,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:10:38.332: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap that has name configmap-test-emptyKey-94aebc07-713e-4fb6-8744-152b22a986ec +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:10:38.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1483" for this suite. 
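+
+The failure asserted above comes from API-server validation: ConfigMap keys must be non-empty and consist of alphanumerics, '-', '_', or '.'. A sketch of a request that is rejected (the name is illustrative):
+```console
+$ cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: empty-key-demo    # hypothetical name
+data:
+  "": "value"             # empty key fails API validation, so the apply is rejected
+EOF
+```
+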
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":169,"skipped":2921,"failed":0} + +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:10:38.385: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396 +STEP: creating an pod +Apr 29 19:10:38.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Apr 29 19:10:38.523: INFO: stderr: "" +Apr 29 19:10:38.523: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for log generator to start. +Apr 29 19:10:38.523: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Apr 29 19:10:38.523: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3320" to be "running and ready, or succeeded" +Apr 29 19:10:38.527: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124548ms +Apr 29 19:10:40.535: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.011963044s +Apr 29 19:10:40.535: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Apr 29 19:10:40.535: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings +Apr 29 19:10:40.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 logs logs-generator logs-generator' +Apr 29 19:10:40.618: INFO: stderr: "" +Apr 29 19:10:40.618: INFO: stdout: "I0429 19:10:39.638269 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/8gz 339\nI0429 19:10:39.839021 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/nbc5 216\nI0429 19:10:40.038820 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/r95 352\nI0429 19:10:40.238263 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/kl2 312\nI0429 19:10:40.438843 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/b77b 291\n" +STEP: limiting log lines +Apr 29 19:10:40.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 logs logs-generator logs-generator --tail=1' +Apr 29 19:10:40.700: INFO: stderr: "" +Apr 29 19:10:40.700: INFO: stdout: "I0429 19:10:40.638266 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nxv 331\n" +Apr 29 19:10:40.700: INFO: got output "I0429 19:10:40.638266 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nxv 331\n" +STEP: limiting log bytes +Apr 29 19:10:40.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 logs logs-generator logs-generator --limit-bytes=1' +Apr 29 19:10:40.782: INFO: stderr: "" +Apr 29 19:10:40.782: INFO: stdout: "I" +Apr 29 19:10:40.782: INFO: got output "I" +STEP: exposing timestamps +Apr 29 19:10:40.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 logs logs-generator logs-generator --tail=1 --timestamps' +Apr 29 19:10:40.862: INFO: stderr: "" +Apr 29 19:10:40.862: INFO: stdout: "2022-04-29T19:10:40.838995825Z I0429 19:10:40.838794 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/rkww 561\n" +Apr 29 19:10:40.862: INFO: got output "2022-04-29T19:10:40.838995825Z I0429 19:10:40.838794 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/rkww 561\n" +STEP: restricting to a time range +Apr 29 19:10:43.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 logs logs-generator logs-generator --since=1s' +Apr 29 19:10:43.449: INFO: stderr: "" +Apr 29 19:10:43.449: INFO: stdout: "I0429 19:10:42.638677 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/slw 270\nI0429 19:10:42.838091 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/ch6 378\nI0429 19:10:43.038670 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4zk 273\nI0429 19:10:43.239066 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/bwwk 358\nI0429 19:10:43.438482 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/thmb 380\n" +Apr 29 19:10:43.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 logs logs-generator logs-generator --since=24h' +Apr 29 19:10:43.536: INFO: stderr: "" +Apr 29 19:10:43.536: INFO: stdout: "I0429 19:10:39.638269 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/8gz 339\nI0429 19:10:39.839021 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/nbc5 216\nI0429 19:10:40.038820 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/r95 352\nI0429 19:10:40.238263 1 logs_generator.go:76] 3 POST 
/api/v1/namespaces/default/pods/kl2 312\nI0429 19:10:40.438843 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/b77b 291\nI0429 19:10:40.638266 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nxv 331\nI0429 19:10:40.838794 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/rkww 561\nI0429 19:10:41.038185 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/s48j 217\nI0429 19:10:41.238677 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/xz8k 550\nI0429 19:10:41.438089 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/lhqp 315\nI0429 19:10:41.638416 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/2xl5 307\nI0429 19:10:41.838725 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/thd7 217\nI0429 19:10:42.038110 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/q28 385\nI0429 19:10:42.238622 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/hhs 303\nI0429 19:10:42.439105 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/gr2w 326\nI0429 19:10:42.638677 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/slw 270\nI0429 19:10:42.838091 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/ch6 378\nI0429 19:10:43.038670 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/4zk 273\nI0429 19:10:43.239066 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/bwwk 358\nI0429 19:10:43.438482 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/thmb 380\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1401 +Apr 29 19:10:43.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-3320 delete pod logs-generator' +Apr 29 19:10:44.388: INFO: stderr: "" +Apr 29 19:10:44.388: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:10:44.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3320" for this suite. 
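+
+The filtering flags exercised above are plain kubectl options and work against any running pod:
+```console
+$ kubectl logs logs-generator --tail=1           # only the most recent line
+$ kubectl logs logs-generator --limit-bytes=1    # only the first byte
+$ kubectl logs logs-generator --tail=1 --timestamps
+$ kubectl logs logs-generator --since=1s         # only entries from the last second
+```
+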
+ +• [SLOW TEST:6.016 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":170,"skipped":2921,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:10:44.402: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Apr 29 19:10:44.469: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1378 48eaa565-5e94-4772-8389-91e947d4dc77 743261 0 2022-04-29 19:10:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 19:10:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:10:44.470: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1378 48eaa565-5e94-4772-8389-91e947d4dc77 743262 0 2022-04-29 19:10:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 19:10:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:10:44.470: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1378 48eaa565-5e94-4772-8389-91e947d4dc77 743263 0 2022-04-29 19:10:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 19:10:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: 
Expecting to observe an add notification for the watched object when the label value was restored +Apr 29 19:10:54.528: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1378 48eaa565-5e94-4772-8389-91e947d4dc77 743347 0 2022-04-29 19:10:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 19:10:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:10:54.529: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1378 48eaa565-5e94-4772-8389-91e947d4dc77 743349 0 2022-04-29 19:10:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 19:10:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:10:54.529: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1378 48eaa565-5e94-4772-8389-91e947d4dc77 743350 0 2022-04-29 19:10:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-04-29 19:10:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:10:54.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1378" for this suite. 
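+
+The selector-scoped watch used above maps directly onto a kubectl invocation; changing the label so it no longer matches yields a DELETED event, and restoring it yields ADDED:
+```console
+$ kubectl get configmaps --watch -l watch-this-configmap=label-changed-and-restored
+```
+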
+ +• [SLOW TEST:10.141 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":171,"skipped":2930,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:10:54.545: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Apr 29 19:10:56.643: INFO: running pods: 0 < 3 +Apr 29 19:10:58.651: INFO: running pods: 2 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:11:00.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2709" for this suite. 
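+
+The status fields the test observes live under .status on the PodDisruptionBudget object. A minimal sketch with illustrative names:
+```console
+$ cat <<'EOF' | kubectl apply -f -
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: demo-pdb            # hypothetical name
+spec:
+  minAvailable: 2
+  selector:
+    matchLabels:
+      app: demo
+EOF
+$ kubectl get pdb demo-pdb \
+    -o jsonpath='{.status.currentHealthy}/{.status.desiredHealthy} allowed={.status.disruptionsAllowed}'
+```
+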
+ +• [SLOW TEST:6.123 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":172,"skipped":2955,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:11:00.668: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Apr 29 19:11:00.737: INFO: Waiting up to 5m0s for pod "pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74" in namespace "emptydir-718" to be "Succeeded or Failed" +Apr 29 19:11:00.742: INFO: Pod "pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74": Phase="Pending", Reason="", readiness=false. Elapsed: 5.099006ms +Apr 29 19:11:02.748: INFO: Pod "pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74": Phase="Running", Reason="", readiness=true. Elapsed: 2.010824646s +Apr 29 19:11:04.756: INFO: Pod "pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019169356s +STEP: Saw pod success +Apr 29 19:11:04.756: INFO: Pod "pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74" satisfied condition "Succeeded or Failed" +Apr 29 19:11:04.762: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74 container test-container: +STEP: delete the pod +Apr 29 19:11:04.783: INFO: Waiting for pod pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74 to disappear +Apr 29 19:11:04.789: INFO: Pod pod-3c191b3d-f1dc-4d01-b398-f5d66184ad74 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:11:04.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-718" for this suite. 
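+
+The tmpfs variant above corresponds to an emptyDir with medium: Memory. A minimal pod sketch (name and image are illustrative) that shows the mount really is tmpfs:
+```console
+$ cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo            # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["sh", "-c", "mount | grep /mnt && stat -c '%a' /mnt"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory             # tmpfs-backed, as in the (root,0777,tmpfs) case above
+EOF
+$ kubectl logs emptydir-demo
+```
+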
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":173,"skipped":2956,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:11:04.804: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Apr 29 19:11:04.846: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:11:12.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-4863" for this suite. + +• [SLOW TEST:7.429 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":174,"skipped":2965,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:11:12.235: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-e69e976d-15e5-491c-9ce6-be5bac4b19b2 +STEP: Creating a pod to test consume secrets +Apr 29 19:11:12.349: INFO: Waiting up to 5m0s for pod "pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5" in namespace "secrets-2337" to be "Succeeded or Failed" +Apr 29 
19:11:12.355: INFO: Pod "pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.386616ms +Apr 29 19:11:14.364: INFO: Pod "pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014027453s +STEP: Saw pod success +Apr 29 19:11:14.364: INFO: Pod "pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5" satisfied condition "Succeeded or Failed" +Apr 29 19:11:14.369: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5 container secret-volume-test: +STEP: delete the pod +Apr 29 19:11:14.407: INFO: Waiting for pod pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5 to disappear +Apr 29 19:11:14.412: INFO: Pod pod-secrets-1882bba2-aeb9-4831-9940-073ad526f1a5 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:11:14.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2337" for this suite. +STEP: Destroying namespace "secret-namespace-3494" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":175,"skipped":2992,"failed":0} + +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:11:14.432: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-6641 +STEP: creating service affinity-nodeport-transition in namespace services-6641 +STEP: creating replication controller affinity-nodeport-transition in namespace services-6641 +I0429 19:11:14.512464 25 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6641, replica count: 3 +I0429 19:11:17.564813 25 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0429 19:11:20.566680 25 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:11:20.586: INFO: Creating new exec pod +Apr 29 19:11:23.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6641 exec execpod-affinitybrjsf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Apr 29 19:11:23.864: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Apr 29 19:11:23.864: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:11:23.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6641 exec execpod-affinitybrjsf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.69.58.101 80' +Apr 29 19:11:24.095: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.69.58.101 80\nConnection to 100.69.58.101 80 port [tcp/http] succeeded!\n" +Apr 29 19:11:24.095: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:11:24.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6641 exec execpod-affinitybrjsf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 31387' +Apr 29 19:11:24.300: INFO: stderr: "+ + echo hostName\nnc -v -t -w 2 10.180.111.35 31387\nConnection to 10.180.111.35 31387 port [tcp/*] succeeded!\n" +Apr 29 19:11:24.300: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:11:24.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6641 exec execpod-affinitybrjsf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.99.66 31387' +Apr 29 19:11:24.479: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.99.66 31387\nConnection to 10.180.99.66 31387 port [tcp/*] succeeded!\n" +Apr 29 19:11:24.479: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:11:24.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6641 exec execpod-affinitybrjsf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.180.111.35:31387/ ; done' +Apr 29 19:11:24.796: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n" +Apr 29 19:11:24.796: INFO: stdout: 
"\naffinity-nodeport-transition-cp55v\naffinity-nodeport-transition-q466n\naffinity-nodeport-transition-q466n\naffinity-nodeport-transition-q466n\naffinity-nodeport-transition-cp55v\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-cp55v\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-q466n\naffinity-nodeport-transition-q466n\naffinity-nodeport-transition-cp55v\naffinity-nodeport-transition-cp55v\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-cp55v\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-cp55v" +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-q466n +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-q466n +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-q466n +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-q466n +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-q466n +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:24.796: INFO: Received response from host: affinity-nodeport-transition-cp55v +Apr 29 19:11:24.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6641 exec execpod-affinitybrjsf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.180.111.35:31387/ ; done' +Apr 29 19:11:25.124: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31387/\n" +Apr 29 19:11:25.124: INFO: stdout: 
"\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8\naffinity-nodeport-transition-4n2n8" +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Received response from host: affinity-nodeport-transition-4n2n8 +Apr 29 19:11:25.124: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6641, will wait for the garbage collector to delete the pods +Apr 29 19:11:25.196: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.249657ms +Apr 29 19:11:25.297: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.921599ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:11:27.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6641" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:13.001 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":176,"skipped":2992,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:11:27.434: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name cm-test-opt-del-74a097a1-46df-41c9-b3fb-b8ff3555250c +STEP: Creating configMap with name cm-test-opt-upd-e3ff9794-3e95-447e-8387-353e709667cd +STEP: Creating the pod +Apr 29 19:11:27.495: INFO: The status of Pod pod-configmaps-770d2244-8de5-4dec-a7e0-281f91c6c73a is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:11:29.502: INFO: The status of Pod pod-configmaps-770d2244-8de5-4dec-a7e0-281f91c6c73a is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:11:31.501: INFO: The status of Pod pod-configmaps-770d2244-8de5-4dec-a7e0-281f91c6c73a is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-74a097a1-46df-41c9-b3fb-b8ff3555250c +STEP: Updating configmap cm-test-opt-upd-e3ff9794-3e95-447e-8387-353e709667cd +STEP: Creating configMap with name cm-test-opt-create-a1f9222b-1a26-4ea9-96fe-5f98c89647c0 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:13:00.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1281" for this suite. 
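+
+The "optional" half of the test relies on the optional flag on the configMap volume source, which lets a pod start even when the referenced ConfigMap is absent. A minimal sketch with illustrative names:
+```console
+$ cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: optional-cm-demo          # hypothetical name
+spec:
+  containers:
+  - name: test
+    image: busybox
+    command: ["sleep", "3600"]
+    volumeMounts:
+    - name: cfg
+      mountPath: /etc/cfg
+  volumes:
+  - name: cfg
+    configMap:
+      name: missing-cm            # may not exist yet
+      optional: true              # pod starts anyway; files appear once the ConfigMap is created
+EOF
+```
+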
+ +• [SLOW TEST:93.549 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":177,"skipped":3021,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:13:00.984: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4286.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4286.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4286.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:13:05.143: INFO: DNS probes using dns-4286/dns-test-26fb11b1-367d-4f0a-bb60-8095acc3e7be succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:13:05.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-4286" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":178,"skipped":3037,"failed":0} +SS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:13:05.218: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-secret-xgtv +STEP: Creating a pod to test atomic-volume-subpath +Apr 29 19:13:05.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xgtv" in namespace "subpath-6839" to be "Succeeded or Failed" +Apr 29 19:13:05.281: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.844229ms +Apr 29 19:13:07.286: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 2.009821416s +Apr 29 19:13:09.292: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 4.015450619s +Apr 29 19:13:11.297: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 6.02033813s +Apr 29 19:13:13.307: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 8.030410094s +Apr 29 19:13:15.316: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 10.039262327s +Apr 29 19:13:17.324: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 12.047697706s +Apr 29 19:13:19.330: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 14.053206779s +Apr 29 19:13:21.337: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 16.060865536s +Apr 29 19:13:23.344: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 18.06794306s +Apr 29 19:13:25.350: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 20.073669963s +Apr 29 19:13:27.358: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Running", Reason="", readiness=true. Elapsed: 22.081892628s +Apr 29 19:13:29.365: INFO: Pod "pod-subpath-test-secret-xgtv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.088268169s +STEP: Saw pod success +Apr 29 19:13:29.365: INFO: Pod "pod-subpath-test-secret-xgtv" satisfied condition "Succeeded or Failed" +Apr 29 19:13:29.370: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-subpath-test-secret-xgtv container test-container-subpath-secret-xgtv: +STEP: delete the pod +Apr 29 19:13:29.400: INFO: Waiting for pod pod-subpath-test-secret-xgtv to disappear +Apr 29 19:13:29.405: INFO: Pod pod-subpath-test-secret-xgtv no longer exists +STEP: Deleting pod pod-subpath-test-secret-xgtv +Apr 29 19:13:29.405: INFO: Deleting pod "pod-subpath-test-secret-xgtv" in namespace "subpath-6839" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:13:29.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6839" for this suite. + +• [SLOW TEST:24.213 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":179,"skipped":3039,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:13:29.432: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed in namespace container-probe-3113 +Apr 29 19:13:31.516: INFO: Started pod liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed in namespace container-probe-3113 +STEP: checking the pod's current state and verifying that restartCount is present +Apr 29 19:13:31.522: INFO: Initial restart count of pod liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed is 0 +Apr 29 19:13:51.608: INFO: Restart count of pod container-probe-3113/liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed is now 1 (20.086063647s elapsed) +Apr 29 19:14:11.689: INFO: Restart count of pod container-probe-3113/liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed is now 2 (40.166874953s elapsed) +Apr 29 19:14:31.761: INFO: Restart count of pod container-probe-3113/liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed is now 3 (1m0.238509875s elapsed) 
+Apr 29 19:14:51.822: INFO: Restart count of pod container-probe-3113/liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed is now 4 (1m20.299375652s elapsed) +Apr 29 19:16:04.049: INFO: Restart count of pod container-probe-3113/liveness-f7088ddc-4ca2-49ac-ab22-efb2d14739ed is now 5 (2m32.526454721s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:04.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3113" for this suite. + +• [SLOW TEST:154.648 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":180,"skipped":3047,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:04.083: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:16:04.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32" in namespace "downward-api-506" to be "Succeeded or Failed" +Apr 29 19:16:04.151: INFO: Pod "downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32": Phase="Pending", Reason="", readiness=false. Elapsed: 5.335212ms +Apr 29 19:16:06.158: INFO: Pod "downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011604332s +Apr 29 19:16:08.166: INFO: Pod "downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019568022s +STEP: Saw pod success +Apr 29 19:16:08.166: INFO: Pod "downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32" satisfied condition "Succeeded or Failed" +Apr 29 19:16:08.170: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32 container client-container: +STEP: delete the pod +Apr 29 19:16:08.200: INFO: Waiting for pod downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32 to disappear +Apr 29 19:16:08.210: INFO: Pod downwardapi-volume-141af065-4507-4838-8993-28c76ebdea32 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:08.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-506" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":181,"skipped":3066,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:08.228: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:14.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-6955" for this suite. +STEP: Destroying namespace "nsdeletetest-8466" for this suite. +Apr 29 19:16:14.382: INFO: Namespace nsdeletetest-8466 was already deleted +STEP: Destroying namespace "nsdeletetest-4703" for this suite. 
+ +• [SLOW TEST:6.160 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":182,"skipped":3089,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:14.388: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Apr 29 19:16:14.437: INFO: Waiting up to 5m0s for pod "pod-8ac80a18-2231-45ce-891b-024de871d459" in namespace "emptydir-8641" to be "Succeeded or Failed" +Apr 29 19:16:14.441: INFO: Pod "pod-8ac80a18-2231-45ce-891b-024de871d459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096878ms +Apr 29 19:16:16.447: INFO: Pod "pod-8ac80a18-2231-45ce-891b-024de871d459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009791483s +STEP: Saw pod success +Apr 29 19:16:16.447: INFO: Pod "pod-8ac80a18-2231-45ce-891b-024de871d459" satisfied condition "Succeeded or Failed" +Apr 29 19:16:16.453: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-8ac80a18-2231-45ce-891b-024de871d459 container test-container: +STEP: delete the pod +Apr 29 19:16:16.478: INFO: Waiting for pod pod-8ac80a18-2231-45ce-891b-024de871d459 to disappear +Apr 29 19:16:16.482: INFO: Pod pod-8ac80a18-2231-45ce-891b-024de871d459 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:16.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8641" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":183,"skipped":3101,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:16.494: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Apr 29 19:16:16.531: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Apr 29 19:16:16.544: INFO: Waiting for terminating namespaces to be deleted... +Apr 29 19:16:16.550: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-control-plane-4czbf before test +Apr 29 19:16:16.570: INFO: ako-0 from avi-system started at 2022-04-29 17:51:41 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container ako-tkg-system-tkg-mgmt-vc ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: capi-kubeadm-bootstrap-controller-manager-7ffb6dc8fc-8l5kl from capi-kubeadm-bootstrap-system started at 2022-04-29 01:35:11 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 12 +Apr 29 19:16:16.570: INFO: capi-kubeadm-control-plane-controller-manager-667999fdb8-twv4s from capi-kubeadm-control-plane-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 2 +Apr 29 19:16:16.570: INFO: capi-controller-manager-65c5769c4c-555gx from capi-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 15 +Apr 29 19:16:16.570: INFO: capv-controller-manager-75bdbfb7dc-888vj from capv-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 15 +Apr 29 19:16:16.570: INFO: cert-manager-cainjector-cc485fcdc-4qq4t from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container cert-manager ready: true, restart count 7 +Apr 29 19:16:16.570: INFO: cert-manager-d6b468546-pctjx from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: cert-manager-webhook-dd697458d-c6xrg from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: antrea-agent-k79rx from kube-system started at 2022-04-28 17:17:44 +0000 UTC (2 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: Container antrea-ovs ready: true, restart 
count 1 +Apr 29 19:16:16.570: INFO: antrea-controller-f84fc8fd6-clc5q from kube-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container antrea-controller ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: coredns-67c8559bb6-7k2mz from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container coredns ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: coredns-67c8559bb6-bgthp from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container coredns ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: etcd-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container etcd ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: kube-apiserver-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container kube-apiserver ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container kube-controller-manager ready: true, restart count 18 +Apr 29 19:16:16.570: INFO: kube-proxy-2fvxm from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: kube-scheduler-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-29 16:30:09 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container kube-scheduler ready: true, restart count 18 +Apr 29 19:16:16.570: INFO: metrics-server-58bbfb986f-7q897 from kube-system started at 2022-04-29 14:28:58 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container metrics-server ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: vsphere-cloud-controller-manager-9gc8w from kube-system started at 2022-04-28 17:16:39 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 19 +Apr 29 19:16:16.570: INFO: vsphere-csi-controller-7d96796c4d-p276x from kube-system started at 2022-04-28 17:16:08 +0000 UTC (5 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container csi-attacher ready: true, restart count 21 +Apr 29 19:16:16.570: INFO: Container csi-provisioner ready: true, restart count 22 +Apr 29 19:16:16.570: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: Container vsphere-csi-controller ready: true, restart count 2 +Apr 29 19:16:16.570: INFO: Container vsphere-syncer ready: true, restart count 18 +Apr 29 19:16:16.570: INFO: vsphere-csi-node-ld676 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: Container node-driver-registrar ready: true, restart count 2 +Apr 29 19:16:16.570: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2kxj9 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container 
sonobuoy-worker ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: Container systemd-logs ready: false, restart count 0 +Apr 29 19:16:16.570: INFO: secretgen-controller-6dd9c95967-hfpnj from tanzu-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container secretgen-controller ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: ako-operator-controller-manager-79cb9ccfc8-lwlw6 from tkg-system-networking started at 2022-04-29 17:51:37 +0000 UTC (2 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: kapp-controller-5b7d886dcc-rg8d8 from tkg-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container kapp-controller ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: tanzu-addons-controller-manager-667d5c846f-f78n7 from tkg-system started at 2022-04-28 17:13:30 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container tanzu-addons-controller ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: tanzu-capabilities-controller-manager-7864dcb4b7-9jhgh from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: tanzu-featuregates-controller-manager-fb8cf8ffc-qptgc from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 0 +Apr 29 19:16:16.570: INFO: tkr-controller-manager-7c99874659-rqlgx from tkr-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.570: INFO: Container manager ready: true, restart count 1 +Apr 29 19:16:16.570: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-md-0-59d8b7c778-msxpc before test +Apr 29 19:16:16.584: INFO: antrea-agent-jmd5f from kube-system started at 2022-04-28 17:17:22 +0000 UTC (2 container statuses recorded) +Apr 29 19:16:16.584: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 19:16:16.584: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 19:16:16.584: INFO: kube-proxy-gqrhv from kube-system started at 2022-04-28 17:12:43 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.584: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 19:16:16.584: INFO: vsphere-csi-node-fxcc9 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 19:16:16.584: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:16:16.584: INFO: Container node-driver-registrar ready: true, restart count 4 +Apr 29 19:16:16.584: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 19:16:16.584: INFO: sonobuoy from sonobuoy started at 2022-04-29 18:13:55 +0000 UTC (1 container statuses recorded) +Apr 29 19:16:16.584: INFO: Container kube-sonobuoy ready: true, restart count 0 +Apr 29 19:16:16.584: INFO: sonobuoy-e2e-job-d928f42f9304448b from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:16:16.584: INFO: Container e2e ready: true, restart count 0 +Apr 29 19:16:16.584: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:16:16.584: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2lph2 from sonobuoy 
started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:16:16.584: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:16:16.584: INFO: Container systemd-logs ready: false, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: verifying the node has the label node tkg-mgmt-vc-control-plane-4czbf +STEP: verifying the node has the label node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.662: INFO: Pod ako-0 requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod capi-kubeadm-bootstrap-controller-manager-7ffb6dc8fc-8l5kl requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod capi-kubeadm-control-plane-controller-manager-667999fdb8-twv4s requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod capi-controller-manager-65c5769c4c-555gx requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod capv-controller-manager-75bdbfb7dc-888vj requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod cert-manager-cainjector-cc485fcdc-4qq4t requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod cert-manager-d6b468546-pctjx requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod cert-manager-webhook-dd697458d-c6xrg requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod antrea-agent-jmd5f requesting resource cpu=400m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.662: INFO: Pod antrea-agent-k79rx requesting resource cpu=400m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.662: INFO: Pod antrea-controller-f84fc8fd6-clc5q requesting resource cpu=200m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod coredns-67c8559bb6-7k2mz requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod coredns-67c8559bb6-bgthp requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod etcd-tkg-mgmt-vc-control-plane-4czbf requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod kube-apiserver-tkg-mgmt-vc-control-plane-4czbf requesting resource cpu=250m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf requesting resource cpu=200m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod kube-proxy-2fvxm requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod kube-proxy-gqrhv requesting resource cpu=0m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.663: INFO: Pod kube-scheduler-tkg-mgmt-vc-control-plane-4czbf requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod metrics-server-58bbfb986f-7q897 requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod vsphere-cloud-controller-manager-9gc8w requesting resource cpu=200m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod vsphere-csi-controller-7d96796c4d-p276x requesting resource cpu=0m on Node 
tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod vsphere-csi-node-fxcc9 requesting resource cpu=0m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.663: INFO: Pod vsphere-csi-node-ld676 requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod sonobuoy requesting resource cpu=0m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.663: INFO: Pod sonobuoy-e2e-job-d928f42f9304448b requesting resource cpu=0m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.663: INFO: Pod sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2kxj9 requesting resource cpu=0m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2lph2 requesting resource cpu=0m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +Apr 29 19:16:16.663: INFO: Pod secretgen-controller-6dd9c95967-hfpnj requesting resource cpu=120m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod ako-operator-controller-manager-79cb9ccfc8-lwlw6 requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod kapp-controller-5b7d886dcc-rg8d8 requesting resource cpu=120m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod tanzu-addons-controller-manager-667d5c846f-f78n7 requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod tanzu-capabilities-controller-manager-7864dcb4b7-9jhgh requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod tanzu-featuregates-controller-manager-fb8cf8ffc-qptgc requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.663: INFO: Pod tkr-controller-manager-7c99874659-rqlgx requesting resource cpu=100m on Node tkg-mgmt-vc-control-plane-4czbf +STEP: Starting Pods to consume most of the cluster CPU. +Apr 29 19:16:16.663: INFO: Creating a pod which consumes cpu=2387m on Node tkg-mgmt-vc-control-plane-4czbf +Apr 29 19:16:16.672: INFO: Creating a pod which consumes cpu=3920m on Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2.16ea7404a5703390], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4907/filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2 to tkg-mgmt-vc-md-0-59d8b7c778-msxpc] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2.16ea7404d98f5b26], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2.16ea7404df63f54d], Reason = [Created], Message = [Created container filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2.16ea7404eb1bb36c], Reason = [Started], Message = [Started container filler-pod-a54e08b7-7dc1-40a8-b611-ae2a623668a2] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d.16ea7404a4cac889], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4907/filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d to tkg-mgmt-vc-control-plane-4czbf] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d.16ea7404e2887856], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.5" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d.16ea7404f390f294], Reason = [Created], Message = [Created container filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d.16ea7404fe96c08c], Reason = [Started], Message = [Started container filler-pod-b5ea8574-8a63-4d50-a5a7-91af52ced87d] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.16ea74051edc83c7], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] +STEP: removing the label node off the node tkg-mgmt-vc-control-plane-4czbf +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:19.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-4907" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":184,"skipped":3103,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:19.797: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Apr 29 19:16:21.859: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:21.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-8254" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":185,"skipped":3116,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:21.885: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating all guestbook components +Apr 29 19:16:21.918: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Apr 29 19:16:21.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 create -f -' +Apr 29 19:16:25.345: INFO: stderr: "" +Apr 29 19:16:25.345: INFO: stdout: "service/agnhost-replica created\n" +Apr 29 19:16:25.345: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Apr 29 19:16:25.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 create -f -' +Apr 29 19:16:25.721: INFO: stderr: "" +Apr 29 19:16:25.721: INFO: stdout: "service/agnhost-primary created\n" +Apr 29 19:16:25.722: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Apr 29 19:16:25.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 create -f -' +Apr 29 19:16:26.163: INFO: stderr: "" +Apr 29 19:16:26.163: INFO: stdout: "service/frontend created\n" +Apr 29 19:16:26.164: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Apr 29 19:16:26.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 create -f -' +Apr 29 19:16:26.579: INFO: stderr: "" +Apr 29 19:16:26.579: INFO: stdout: "deployment.apps/frontend created\n" +Apr 29 19:16:26.580: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Apr 29 19:16:26.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 create -f -' +Apr 29 19:16:26.960: INFO: stderr: "" +Apr 29 19:16:26.960: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Apr 29 19:16:26.961: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.32 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Apr 29 19:16:26.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 create -f -' +Apr 29 19:16:27.430: INFO: stderr: "" +Apr 29 19:16:27.430: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Apr 29 19:16:27.430: INFO: Waiting for all frontend pods to be Running. +Apr 29 19:16:32.482: INFO: Waiting for frontend to serve content. +Apr 29 19:16:32.504: INFO: Trying to add a new entry to the guestbook. +Apr 29 19:16:32.521: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources +Apr 29 19:16:32.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 delete --grace-period=0 --force -f -' +Apr 29 19:16:32.633: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:16:32.634: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Apr 29 19:16:32.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 delete --grace-period=0 --force -f -' +Apr 29 19:16:32.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:16:32.728: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Apr 29 19:16:32.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 delete --grace-period=0 --force -f -' +Apr 29 19:16:32.821: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:16:32.821: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Apr 29 19:16:32.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 delete --grace-period=0 --force -f -' +Apr 29 19:16:32.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:16:32.912: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Apr 29 19:16:32.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 delete --grace-period=0 --force -f -' +Apr 29 19:16:33.000: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:16:33.000: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Apr 29 19:16:33.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7379 delete --grace-period=0 --force -f -' +Apr 29 19:16:33.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:16:33.110: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7379" for this suite. 
+ +• [SLOW TEST:11.241 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Guestbook application + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339 + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":186,"skipped":3146,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:33.127: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Apr 29 19:16:36.200: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:36.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-1249" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":187,"skipped":3176,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:36.234: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap configmap-275/configmap-test-1731317f-f707-48a3-a873-b280421b7af4 +STEP: Creating a pod to test consume configMaps +Apr 29 19:16:36.289: INFO: Waiting up to 5m0s for pod "pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20" in namespace "configmap-275" to be "Succeeded or Failed" +Apr 29 19:16:36.296: INFO: Pod "pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819356ms +Apr 29 19:16:38.302: INFO: Pod "pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012646862s +Apr 29 19:16:40.308: INFO: Pod "pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018895276s +STEP: Saw pod success +Apr 29 19:16:40.308: INFO: Pod "pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20" satisfied condition "Succeeded or Failed" +Apr 29 19:16:40.313: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20 container env-test: +STEP: delete the pod +Apr 29 19:16:40.336: INFO: Waiting for pod pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20 to disappear +Apr 29 19:16:40.340: INFO: Pod pod-configmaps-b36f3a08-7ca9-4687-81ae-cb58b0819b20 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:40.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-275" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":188,"skipped":3193,"failed":0} +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:40.356: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create deployment with httpd image +Apr 29 19:16:40.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-6720 create -f -' +Apr 29 19:16:40.940: INFO: stderr: "" +Apr 29 19:16:40.940: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Apr 29 19:16:40.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-6720 diff -f -' +Apr 29 19:16:41.386: INFO: rc: 1 +Apr 29 19:16:41.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-6720 delete -f -' +Apr 29 19:16:41.471: INFO: stderr: "" +Apr 29 19:16:41.471: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:41.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6720" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":189,"skipped":3201,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:41.489: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:16:41.870: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:16:44.907: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:16:44.913: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-937-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:48.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4186" for this suite. +STEP: Destroying namespace "webhook-4186-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.751 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":190,"skipped":3242,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:48.242: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename ingressclass +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Apr 29 19:16:48.395: INFO: starting watch +STEP: patching +STEP: updating +Apr 29 19:16:48.426: INFO: waiting for watch events with expected annotations +Apr 29 19:16:48.426: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:48.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-7234" for this suite. 
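
The create/get/list/watch/patch/update/delete/deleteCollection sequence above is plain CRUD against the networking.k8s.io/v1 IngressClass resource. A minimal sketch of the same operations by hand (the class name and controller string are hypothetical):

```console
$ kubectl create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: demo-class
spec:
  controller: example.com/demo-controller
EOF
$ kubectl get ingressclass demo-class -o yaml
$ kubectl patch ingressclass demo-class --type merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
$ kubectl delete ingressclass demo-class
```
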
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":191,"skipped":3267,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:48.491: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir volume type on tmpfs +Apr 29 19:16:48.541: INFO: Waiting up to 5m0s for pod "pod-07ac303e-f557-4a31-b0bf-8814f7482fdc" in namespace "emptydir-9938" to be "Succeeded or Failed" +Apr 29 19:16:48.547: INFO: Pod "pod-07ac303e-f557-4a31-b0bf-8814f7482fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465189ms +Apr 29 19:16:50.555: INFO: Pod "pod-07ac303e-f557-4a31-b0bf-8814f7482fdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013886515s +STEP: Saw pod success +Apr 29 19:16:50.555: INFO: Pod "pod-07ac303e-f557-4a31-b0bf-8814f7482fdc" satisfied condition "Succeeded or Failed" +Apr 29 19:16:50.563: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-07ac303e-f557-4a31-b0bf-8814f7482fdc container test-container: +STEP: delete the pod +Apr 29 19:16:50.583: INFO: Waiting for pod pod-07ac303e-f557-4a31-b0bf-8814f7482fdc to disappear +Apr 29 19:16:50.587: INFO: Pod pod-07ac303e-f557-4a31-b0bf-8814f7482fdc no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:16:50.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9938" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":192,"skipped":3273,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:16:50.603: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5124.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5124.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:16:52.708: INFO: DNS probes using dns-test-4635e0c6-6775-4455-b0a7-9fd707f69758 succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5124.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5124.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:16:56.767: INFO: File wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:16:56.774: INFO: File jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:16:56.774: INFO: Lookups using dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 failed for: [wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local] + +Apr 29 19:17:01.796: INFO: File wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:01.806: INFO: File jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Apr 29 19:17:01.806: INFO: Lookups using dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 failed for: [wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local] + +Apr 29 19:17:06.786: INFO: File wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:06.791: INFO: File jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:06.791: INFO: Lookups using dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 failed for: [wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local] + +Apr 29 19:17:11.785: INFO: File wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:11.793: INFO: File jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:11.793: INFO: Lookups using dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 failed for: [wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local] + +Apr 29 19:17:16.783: INFO: File wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:16.791: INFO: File jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:16.791: INFO: Lookups using dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 failed for: [wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local] + +Apr 29 19:17:21.783: INFO: File wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' +Apr 29 19:17:21.789: INFO: File jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local from pod dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 contains 'foo.example.com. +' instead of 'bar.example.com.' 
+Apr 29 19:17:21.789: INFO: Lookups using dns-5124/dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 failed for: [wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local] + +Apr 29 19:17:26.789: INFO: DNS probes using dns-test-5c3cc909-2fda-41cf-aa6b-b0cc22b84627 succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5124.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5124.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5124.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5124.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:17:28.884: INFO: DNS probes using dns-test-3b56a565-a904-4e55-84b3-af71873fc331 succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:28.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5124" for this suite. + +• [SLOW TEST:38.322 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":193,"skipped":3293,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:28.926: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:33.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3990" for this suite. 
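
The Kubelet test above schedules a busybox command that always fails and then asserts the container status carries a terminated reason. A rough standalone equivalent (pod name hypothetical; the reason is typically `Error` for a non-zero exit):

```console
$ kubectl run fail-demo --image=busybox:1.35 --restart=Never -- /bin/false
$ # give the pod a moment to run and fail, then inspect the terminated state
$ kubectl get pod fail-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
Error
```
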
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":194,"skipped":3334,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:33.028: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Apr 29 19:17:33.126: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:17:35.133: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Apr 29 19:17:36.164: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:37.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-2814" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":195,"skipped":3373,"failed":0} +SSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:37.213: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Apr 29 19:17:37.297: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:37.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3811" for this suite. +•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":196,"skipped":3381,"failed":0} +SS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:37.337: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Apr 29 19:17:37.393: INFO: The status of Pod pod-update-556badcf-99e5-4692-ac16-2a9841006a6b is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:17:39.400: INFO: The status of Pod pod-update-556badcf-99e5-4692-ac16-2a9841006a6b is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Apr 29 19:17:39.927: INFO: Successfully updated pod "pod-update-556badcf-99e5-4692-ac16-2a9841006a6b" +STEP: verifying the updated pod is in kubernetes +Apr 29 19:17:39.936: INFO: Pod update OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 
19:17:39.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2323" for this suite. +•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":197,"skipped":3383,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:39.952: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:40.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-8304" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":198,"skipped":3396,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:40.063: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-projected-all-test-volume-9f10a551-7f88-49a7-befe-4705e67813bd +STEP: Creating secret with name secret-projected-all-test-volume-36b23d03-0901-464b-ad27-ffc5b94bba39 +STEP: Creating a pod to test Check all projections for projected volume plugin +Apr 29 19:17:40.125: INFO: Waiting up to 5m0s for pod "projected-volume-bb1f2099-a187-4515-9254-b3266713f52d" in namespace "projected-2095" to be "Succeeded or Failed" +Apr 29 19:17:40.129: INFO: Pod "projected-volume-bb1f2099-a187-4515-9254-b3266713f52d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.448637ms +Apr 29 19:17:42.137: INFO: Pod "projected-volume-bb1f2099-a187-4515-9254-b3266713f52d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01154919s +STEP: Saw pod success +Apr 29 19:17:42.137: INFO: Pod "projected-volume-bb1f2099-a187-4515-9254-b3266713f52d" satisfied condition "Succeeded or Failed" +Apr 29 19:17:42.143: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod projected-volume-bb1f2099-a187-4515-9254-b3266713f52d container projected-all-volume-test: +STEP: delete the pod +Apr 29 19:17:42.172: INFO: Waiting for pod projected-volume-bb1f2099-a187-4515-9254-b3266713f52d to disappear +Apr 29 19:17:42.177: INFO: Pod projected-volume-bb1f2099-a187-4515-9254-b3266713f52d no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:42.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2095" for this suite. +•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":199,"skipped":3411,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:42.193: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:17:42.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-8397 version' +Apr 29 19:17:42.312: INFO: stderr: "" +Apr 29 19:17:42.312: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.4\", GitCommit:\"b695d79d4f967c403a96986f1750a35eb75e75f1\", GitTreeState:\"clean\", BuildDate:\"2021-11-17T15:48:33Z\", GoVersion:\"go1.16.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.8+vmware.1\", GitCommit:\"d797572df69c3951e4e8d495bf7720b594fd1c43\", GitTreeState:\"clean\", BuildDate:\"2022-03-21T23:17:28Z\", GoVersion:\"go1.16.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:42.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8397" for this suite. 
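
The version check above only asserts that both the client and server stanzas appear in `kubectl version` output. The same fields are available structured, which is more robust to parse (assuming `jq` is installed); against this cluster the values match the stdout captured above:

```console
$ kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
v1.22.4
v1.22.8+vmware.1
```
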
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":200,"skipped":3426,"failed":0} +S +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:42.329: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in container's args +Apr 29 19:17:42.392: INFO: Waiting up to 5m0s for pod "var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3" in namespace "var-expansion-212" to be "Succeeded or Failed" +Apr 29 19:17:42.399: INFO: Pod "var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.234895ms +Apr 29 19:17:44.405: INFO: Pod "var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3": Phase="Running", Reason="", readiness=true. Elapsed: 2.013623023s +Apr 29 19:17:46.410: INFO: Pod "var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018808963s +STEP: Saw pod success +Apr 29 19:17:46.411: INFO: Pod "var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3" satisfied condition "Succeeded or Failed" +Apr 29 19:17:46.415: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3 container dapi-container: +STEP: delete the pod +Apr 29 19:17:46.434: INFO: Waiting for pod var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3 to disappear +Apr 29 19:17:46.438: INFO: Pod var-expansion-731d8a0b-bed1-4e98-ba32-870cd012d6a3 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:17:46.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-212" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":201,"skipped":3427,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:17:46.454: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: set up a multi version CRD +Apr 29 19:17:46.510: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:18:27.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2133" for this suite. 
+ +• [SLOW TEST:41.359 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":202,"skipped":3487,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:18:27.814: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Apr 29 19:18:38.021: INFO: The status of Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf is Running (Ready = true) +Apr 29 19:18:38.359: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Apr 29 19:18:38.359: INFO: Deleting pod "simpletest-rc-to-be-deleted-2f7np" in namespace "gc-5893" +Apr 29 19:18:38.377: INFO: Deleting pod "simpletest-rc-to-be-deleted-6q5td" in namespace "gc-5893" +Apr 29 19:18:38.389: INFO: Deleting pod "simpletest-rc-to-be-deleted-8cbkr" in namespace "gc-5893" +Apr 29 19:18:38.408: INFO: Deleting pod "simpletest-rc-to-be-deleted-bbmq8" in namespace "gc-5893" +Apr 29 19:18:38.421: INFO: Deleting pod "simpletest-rc-to-be-deleted-fkmdw" in namespace "gc-5893" 
+[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:18:38.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5893" for this suite. + +• [SLOW TEST:10.635 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":203,"skipped":3524,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:18:38.449: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create set of pods +Apr 29 19:18:38.511: INFO: created test-pod-1 +Apr 29 19:18:38.516: INFO: created test-pod-2 +Apr 29 19:18:38.522: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Apr 29 19:18:38.554: INFO: Pod quantity 3 is different from expected quantity 0 +Apr 29 19:18:39.562: INFO: Pod quantity 3 is different from expected quantity 0 +Apr 29 19:18:40.562: INFO: Pod quantity 3 is different from expected quantity 0 +Apr 29 19:18:41.562: INFO: Pod quantity 2 is different from expected quantity 0 +Apr 29 19:18:42.562: INFO: Pod quantity 1 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:18:43.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-850" for this suite. 
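
The collection delete above is a single DELETE request against the pods collection scoped by a label selector, after which the test polls the same selector down to zero, exactly as the quantity lines show. Roughly, via kubectl (the label is hypothetical):

```console
$ kubectl delete pods -l type=Testing --wait=false
$ kubectl get pods -l type=Testing   # repeat until "No resources found"
```
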
+ +• [SLOW TEST:5.148 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":204,"skipped":3538,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:18:43.598: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Apr 29 19:18:43.668: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Apr 29 19:18:43.684: INFO: Waiting for terminating namespaces to be deleted... +Apr 29 19:18:43.692: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-control-plane-4czbf before test +Apr 29 19:18:43.721: INFO: ako-0 from avi-system started at 2022-04-29 17:51:41 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container ako-tkg-system-tkg-mgmt-vc ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: capi-kubeadm-bootstrap-controller-manager-7ffb6dc8fc-8l5kl from capi-kubeadm-bootstrap-system started at 2022-04-29 01:35:11 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 12 +Apr 29 19:18:43.722: INFO: capi-kubeadm-control-plane-controller-manager-667999fdb8-twv4s from capi-kubeadm-control-plane-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 2 +Apr 29 19:18:43.722: INFO: capi-controller-manager-65c5769c4c-555gx from capi-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 15 +Apr 29 19:18:43.722: INFO: capv-controller-manager-75bdbfb7dc-888vj from capv-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 15 +Apr 29 19:18:43.722: INFO: cert-manager-cainjector-cc485fcdc-4qq4t from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container cert-manager ready: true, restart count 7 +Apr 29 19:18:43.722: INFO: cert-manager-d6b468546-pctjx from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container cert-manager ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: cert-manager-webhook-dd697458d-c6xrg from cert-manager started at 2022-04-29 14:28:54 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container 
cert-manager ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: antrea-agent-k79rx from kube-system started at 2022-04-28 17:17:44 +0000 UTC (2 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: antrea-controller-f84fc8fd6-clc5q from kube-system started at 2022-04-29 00:56:26 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container antrea-controller ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: coredns-67c8559bb6-7k2mz from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container coredns ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: coredns-67c8559bb6-bgthp from kube-system started at 2022-04-28 17:12:06 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container coredns ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: etcd-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container etcd ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: kube-apiserver-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container kube-apiserver ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container kube-controller-manager ready: true, restart count 18 +Apr 29 19:18:43.722: INFO: kube-proxy-2fvxm from kube-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: kube-scheduler-tkg-mgmt-vc-control-plane-4czbf from kube-system started at 2022-04-29 16:30:09 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container kube-scheduler ready: true, restart count 18 +Apr 29 19:18:43.722: INFO: metrics-server-58bbfb986f-7q897 from kube-system started at 2022-04-29 14:28:58 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container metrics-server ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: vsphere-cloud-controller-manager-9gc8w from kube-system started at 2022-04-28 17:16:39 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 19 +Apr 29 19:18:43.722: INFO: vsphere-csi-controller-7d96796c4d-p276x from kube-system started at 2022-04-28 17:16:08 +0000 UTC (5 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container csi-attacher ready: true, restart count 21 +Apr 29 19:18:43.722: INFO: Container csi-provisioner ready: true, restart count 22 +Apr 29 19:18:43.722: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: Container vsphere-csi-controller ready: true, restart count 2 +Apr 29 19:18:43.722: INFO: Container vsphere-syncer ready: true, restart count 18 +Apr 29 19:18:43.722: INFO: vsphere-csi-node-ld676 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: Container 
node-driver-registrar ready: true, restart count 2 +Apr 29 19:18:43.722: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2kxj9 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: Container systemd-logs ready: false, restart count 0 +Apr 29 19:18:43.722: INFO: secretgen-controller-6dd9c95967-hfpnj from tanzu-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container secretgen-controller ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: ako-operator-controller-manager-79cb9ccfc8-lwlw6 from tkg-system-networking started at 2022-04-29 17:51:37 +0000 UTC (2 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container kube-rbac-proxy ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: kapp-controller-5b7d886dcc-rg8d8 from tkg-system started at 2022-04-28 17:10:49 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container kapp-controller ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: tanzu-addons-controller-manager-667d5c846f-f78n7 from tkg-system started at 2022-04-28 17:13:30 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container tanzu-addons-controller ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: tanzu-capabilities-controller-manager-7864dcb4b7-9jhgh from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: tanzu-featuregates-controller-manager-fb8cf8ffc-qptgc from tkg-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 0 +Apr 29 19:18:43.722: INFO: tkr-controller-manager-7c99874659-rqlgx from tkr-system started at 2022-04-29 17:51:37 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.722: INFO: Container manager ready: true, restart count 1 +Apr 29 19:18:43.722: INFO: +Logging pods the apiserver thinks is on node tkg-mgmt-vc-md-0-59d8b7c778-msxpc before test +Apr 29 19:18:43.741: INFO: antrea-agent-jmd5f from kube-system started at 2022-04-28 17:17:22 +0000 UTC (2 container statuses recorded) +Apr 29 19:18:43.741: INFO: Container antrea-agent ready: true, restart count 1 +Apr 29 19:18:43.741: INFO: Container antrea-ovs ready: true, restart count 1 +Apr 29 19:18:43.741: INFO: kube-proxy-gqrhv from kube-system started at 2022-04-28 17:12:43 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.741: INFO: Container kube-proxy ready: true, restart count 1 +Apr 29 19:18:43.741: INFO: vsphere-csi-node-fxcc9 from kube-system started at 2022-04-28 17:16:08 +0000 UTC (3 container statuses recorded) +Apr 29 19:18:43.742: INFO: Container liveness-probe ready: true, restart count 1 +Apr 29 19:18:43.742: INFO: Container node-driver-registrar ready: true, restart count 4 +Apr 29 19:18:43.742: INFO: Container vsphere-csi-node ready: true, restart count 1 +Apr 29 19:18:43.742: INFO: sonobuoy from sonobuoy started at 2022-04-29 18:13:55 +0000 UTC (1 container statuses recorded) +Apr 29 19:18:43.742: INFO: Container kube-sonobuoy ready: true, restart count 0 +Apr 29 19:18:43.742: INFO: 
sonobuoy-e2e-job-d928f42f9304448b from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:18:43.742: INFO: Container e2e ready: true, restart count 0 +Apr 29 19:18:43.742: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:18:43.742: INFO: sonobuoy-systemd-logs-daemon-set-577f23acb8f64f96-2lph2 from sonobuoy started at 2022-04-29 18:13:57 +0000 UTC (2 container statuses recorded) +Apr 29 19:18:43.742: INFO: Container sonobuoy-worker ready: true, restart count 0 +Apr 29 19:18:43.742: INFO: Container systemd-logs ready: false, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-b373edd7-d9c8-4633-9a78-3ed30c4535ec 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-b373edd7-d9c8-4633-9a78-3ed30c4535ec off the node tkg-mgmt-vc-md-0-59d8b7c778-msxpc +STEP: verifying the node doesn't have the label kubernetes.io/e2e-b373edd7-d9c8-4633-9a78-3ed30c4535ec +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:18:47.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8255" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":205,"skipped":3558,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:18:47.897: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:18:47.969: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:18:54.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-2765" for this suite. 
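
Listing CustomResourceDefinition objects, as exercised above, is an ordinary list against the apiextensions.k8s.io/v1 API; for reference (assuming `jq` is installed for the raw variant):

```console
$ kubectl get crds -o name
$ kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions | jq '.items | length'
```
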
+ +• [SLOW TEST:6.926 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":206,"skipped":3560,"failed":0} +SSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:18:54.823: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5193.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5193.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5193.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:18:58.939: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.944: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.950: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.957: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.973: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.979: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.983: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod 
dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.988: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:18:58.998: INFO: Lookups using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local] + +Apr 29 19:19:04.007: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.019: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.025: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.044: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.050: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.055: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.061: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:04.074: INFO: Lookups using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local] + +Apr 29 19:19:09.007: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.020: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.027: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.044: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.051: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.058: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.064: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:09.075: INFO: Lookups using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local] + +Apr 29 19:19:14.007: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.015: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.022: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.032: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.055: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.063: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.070: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.077: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:14.090: INFO: Lookups using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local] + +Apr 29 19:19:19.008: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.016: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.024: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.031: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find 
the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.055: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.062: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.069: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.075: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:19.089: INFO: Lookups using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local] + +Apr 29 19:19:24.007: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.013: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.021: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.029: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.048: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.055: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.062: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.068: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local from pod dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4: the server could not find the requested resource (get pods dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4) +Apr 29 19:19:24.083: INFO: Lookups using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5193.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5193.svc.cluster.local jessie_udp@dns-test-service-2.dns-5193.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5193.svc.cluster.local] + +Apr 29 19:19:29.080: INFO: DNS probes using dns-5193/dns-test-16dfdc92-9d7e-442c-8d9f-ca5b6cd94fc4 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:19:29.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5193" for this suite. + +• [SLOW TEST:34.325 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":207,"skipped":3565,"failed":0} +S +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:19:29.148: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Apr 29 19:19:29.200: INFO: Waiting up to 1m0s for all nodes to be ready +Apr 29 19:20:29.279: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. 
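+Aside on the [sig-network] DNS test above: its wheezy/jessie loops probe per-pod subdomain A records of a headless service. A minimal by-hand reproduction looks roughly like the sketch below; all names are placeholders, and the dnsutils image tag is a guess (any image with dig works).
+
+```console
+$ kubectl create namespace dns-demo
+$ kubectl apply -n dns-demo -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: sub
+spec:
+  clusterIP: None          # headless: DNS serves per-pod records instead of a VIP
+  selector:
+    app: probe
+  ports:
+  - port: 80
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: querier
+  labels:
+    app: probe
+spec:
+  hostname: querier
+  subdomain: sub           # yields querier.sub.dns-demo.svc.cluster.local
+  containers:
+  - name: main
+    image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5   # tag is a guess
+    command: ["sleep", "3600"]
+EOF
+$ kubectl exec -n dns-demo querier -- \
+    dig +notcp +noall +answer querier.sub.dns-demo.svc.cluster.local A
+```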
+Apr 29 19:20:29.337: INFO: Created pod: pod0-0-sched-preemption-low-priority +Apr 29 19:20:29.350: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Apr 29 19:20:29.371: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Apr 29 19:20:29.379: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:20:41.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-2882" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:72.410 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":208,"skipped":3566,"failed":0} +SSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:20:41.558: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test override command +Apr 29 19:20:41.610: INFO: Waiting up to 5m0s for pod "client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095" in namespace "containers-7039" to be "Succeeded or Failed" +Apr 29 19:20:41.619: INFO: Pod "client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095": Phase="Pending", Reason="", readiness=false. Elapsed: 9.112942ms +Apr 29 19:20:43.632: INFO: Pod "client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.022170493s +STEP: Saw pod success +Apr 29 19:20:43.632: INFO: Pod "client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095" satisfied condition "Succeeded or Failed" +Apr 29 19:20:43.638: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095 container agnhost-container: +STEP: delete the pod +Apr 29 19:20:43.668: INFO: Waiting for pod client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095 to disappear +Apr 29 19:20:43.672: INFO: Pod client-containers-369d6fb0-7f21-4ed3-8789-c2eb9cf90095 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:20:43.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-7039" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":209,"skipped":3569,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:20:43.683: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-585c2b81-fd08-424d-8901-13be7e8c191a +STEP: Creating a pod to test consume secrets +Apr 29 19:20:43.752: INFO: Waiting up to 5m0s for pod "pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49" in namespace "secrets-228" to be "Succeeded or Failed" +Apr 29 19:20:43.758: INFO: Pod "pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49": Phase="Pending", Reason="", readiness=false. Elapsed: 5.737321ms +Apr 29 19:20:45.768: INFO: Pod "pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016339829s +STEP: Saw pod success +Apr 29 19:20:45.769: INFO: Pod "pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49" satisfied condition "Succeeded or Failed" +Apr 29 19:20:45.786: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49 container secret-volume-test: +STEP: delete the pod +Apr 29 19:20:45.810: INFO: Waiting for pod pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49 to disappear +Apr 29 19:20:45.814: INFO: Pod pod-secrets-71fee553-c67e-4f64-991f-4e1496ddff49 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:20:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-228" for this suite. 
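+The secret-volume test above sets a non-default file mode plus an fsGroup; a minimal sketch of an equivalent pod follows (hypothetical names, busybox as a stand-in image):
+
+```console
+$ kubectl create secret generic demo-secret --from-literal=key=value
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-reader
+spec:
+  securityContext:
+    runAsUser: 1000        # non-root, as in the [LinuxOnly] test above
+    fsGroup: 2000          # volume files are group-owned by this GID
+  containers:
+  - name: main
+    image: busybox:1.34
+    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/key"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: demo-secret
+      defaultMode: 0440    # readable by owner and fsGroup only
+  restartPolicy: Never
+EOF
+$ kubectl logs secret-reader   # shows mode 0440 files and the secret value
+```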
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":210,"skipped":3593,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:20:45.834: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Apr 29 19:20:45.898: INFO: The status of Pod pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:20:47.906: INFO: The status of Pod pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Apr 29 19:20:48.429: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae" +Apr 29 19:20:48.429: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae" in namespace "pods-1707" to be "terminated due to deadline exceeded" +Apr 29 19:20:48.432: INFO: Pod "pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae": Phase="Running", Reason="", readiness=true. Elapsed: 3.358308ms +Apr 29 19:20:50.438: INFO: Pod "pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae": Phase="Running", Reason="", readiness=true. Elapsed: 2.008832266s +Apr 29 19:20:52.445: INFO: Pod "pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.015391892s +Apr 29 19:20:52.445: INFO: Pod "pod-update-activedeadlineseconds-e407a86b-6817-401a-9033-0163e933ccae" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:20:52.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1707" for this suite. 
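+activeDeadlineSeconds is one of the few pod-spec fields that may be updated in place, which is what the test above exercises; a hand-run equivalent (placeholder names) is:
+
+```console
+$ kubectl run deadline-demo --image=busybox:1.34 --restart=Never -- sleep 3600
+$ kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
+$ kubectl get pod deadline-demo -w   # phase moves to Failed, reason DeadlineExceeded
+```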
+ +• [SLOW TEST:6.643 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":3626,"failed":0} +SSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:20:52.477: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:20:54.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-9387" for this suite. 
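+The sysctl test above relies on kernel.shm_rmid_forced being one of the "safe" sysctls kubelets allow by default; a minimal sketch of such a pod (placeholder names):
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sysctl-demo
+spec:
+  securityContext:
+    sysctls:
+    - name: kernel.shm_rmid_forced   # a safe sysctl, allowed by default
+      value: "1"
+  containers:
+  - name: main
+    image: busybox:1.34
+    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
+  restartPolicy: Never
+EOF
+$ kubectl logs sysctl-demo   # prints 1 once the pod completes
+```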
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":212,"skipped":3632,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:20:54.578: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a replication controller +Apr 29 19:20:54.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 create -f -' +Apr 29 19:20:57.349: INFO: stderr: "" +Apr 29 19:20:57.349: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Apr 29 19:20:57.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 19:20:57.443: INFO: stderr: "" +Apr 29 19:20:57.443: INFO: stdout: "update-demo-nautilus-ksq84 update-demo-nautilus-xnlb4 " +Apr 29 19:20:57.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:20:57.521: INFO: stderr: "" +Apr 29 19:20:57.521: INFO: stdout: "" +Apr 29 19:20:57.521: INFO: update-demo-nautilus-ksq84 is created but not running +Apr 29 19:21:02.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 19:21:02.614: INFO: stderr: "" +Apr 29 19:21:02.614: INFO: stdout: "update-demo-nautilus-ksq84 update-demo-nautilus-xnlb4 " +Apr 29 19:21:02.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:21:02.687: INFO: stderr: "" +Apr 29 19:21:02.687: INFO: stdout: "true" +Apr 29 19:21:02.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 19:21:02.761: INFO: stderr: "" +Apr 29 19:21:02.761: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 19:21:02.761: INFO: validating pod update-demo-nautilus-ksq84 +Apr 29 19:21:02.782: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 19:21:02.783: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 19:21:02.783: INFO: update-demo-nautilus-ksq84 is verified up and running +Apr 29 19:21:02.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-xnlb4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:21:02.857: INFO: stderr: "" +Apr 29 19:21:02.857: INFO: stdout: "true" +Apr 29 19:21:02.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-xnlb4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 19:21:02.941: INFO: stderr: "" +Apr 29 19:21:02.941: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 19:21:02.941: INFO: validating pod update-demo-nautilus-xnlb4 +Apr 29 19:21:02.951: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 19:21:02.951: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 19:21:02.951: INFO: update-demo-nautilus-xnlb4 is verified up and running +STEP: scaling down the replication controller +Apr 29 19:21:02.955: INFO: scanned /root for discovery docs: +Apr 29 19:21:02.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Apr 29 19:21:04.062: INFO: stderr: "" +Apr 29 19:21:04.062: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Apr 29 19:21:04.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 19:21:04.141: INFO: stderr: "" +Apr 29 19:21:04.141: INFO: stdout: "update-demo-nautilus-ksq84 update-demo-nautilus-xnlb4 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Apr 29 19:21:09.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 19:21:09.221: INFO: stderr: "" +Apr 29 19:21:09.221: INFO: stdout: "update-demo-nautilus-ksq84 " +Apr 29 19:21:09.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:21:09.309: INFO: stderr: "" +Apr 29 19:21:09.309: INFO: stdout: "true" +Apr 29 19:21:09.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 19:21:09.390: INFO: stderr: "" +Apr 29 19:21:09.390: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 19:21:09.390: INFO: validating pod update-demo-nautilus-ksq84 +Apr 29 19:21:09.397: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 19:21:09.397: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 19:21:09.397: INFO: update-demo-nautilus-ksq84 is verified up and running +STEP: scaling up the replication controller +Apr 29 19:21:09.401: INFO: scanned /root for discovery docs: +Apr 29 19:21:09.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Apr 29 19:21:10.511: INFO: stderr: "" +Apr 29 19:21:10.511: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Apr 29 19:21:10.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 19:21:10.591: INFO: stderr: "" +Apr 29 19:21:10.591: INFO: stdout: "update-demo-nautilus-j7czz update-demo-nautilus-ksq84 " +Apr 29 19:21:10.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-j7czz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:21:10.661: INFO: stderr: "" +Apr 29 19:21:10.661: INFO: stdout: "" +Apr 29 19:21:10.661: INFO: update-demo-nautilus-j7czz is created but not running +Apr 29 19:21:15.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Apr 29 19:21:15.751: INFO: stderr: "" +Apr 29 19:21:15.751: INFO: stdout: "update-demo-nautilus-j7czz update-demo-nautilus-ksq84 " +Apr 29 19:21:15.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-j7czz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:21:15.831: INFO: stderr: "" +Apr 29 19:21:15.831: INFO: stdout: "true" +Apr 29 19:21:15.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-j7czz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 19:21:15.911: INFO: stderr: "" +Apr 29 19:21:15.911: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 19:21:15.911: INFO: validating pod update-demo-nautilus-j7czz +Apr 29 19:21:15.921: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 19:21:15.921: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 19:21:15.921: INFO: update-demo-nautilus-j7czz is verified up and running +Apr 29 19:21:15.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Apr 29 19:21:16.006: INFO: stderr: "" +Apr 29 19:21:16.006: INFO: stdout: "true" +Apr 29 19:21:16.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods update-demo-nautilus-ksq84 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Apr 29 19:21:16.087: INFO: stderr: "" +Apr 29 19:21:16.087: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" +Apr 29 19:21:16.087: INFO: validating pod update-demo-nautilus-ksq84 +Apr 29 19:21:16.110: INFO: got data: { + "image": "nautilus.jpg" +} + +Apr 29 19:21:16.110: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Apr 29 19:21:16.110: INFO: update-demo-nautilus-ksq84 is verified up and running +STEP: using delete to clean up resources +Apr 29 19:21:16.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 delete --grace-period=0 --force -f -' +Apr 29 19:21:16.193: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:21:16.193: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Apr 29 19:21:16.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get rc,svc -l name=update-demo --no-headers' +Apr 29 19:21:16.311: INFO: stderr: "No resources found in kubectl-7845 namespace.\n" +Apr 29 19:21:16.311: INFO: stdout: "" +Apr 29 19:21:16.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7845 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Apr 29 19:21:16.405: INFO: stderr: "" +Apr 29 19:21:16.405: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:16.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7845" for this suite. 
+ +• [SLOW TEST:21.850 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":213,"skipped":3651,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:16.428: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Apr 29 19:21:16.559: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-751 0a9e25ca-8837-41a3-89d5-46729cfc1dbe 749901 0 2022-04-29 19:21:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 19:21:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:21:16.560: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-751 0a9e25ca-8837-41a3-89d5-46729cfc1dbe 749902 0 2022-04-29 19:21:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 19:21:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Apr 29 19:21:16.592: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-751 0a9e25ca-8837-41a3-89d5-46729cfc1dbe 749903 0 2022-04-29 19:21:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 19:21:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:21:16.592: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-751 0a9e25ca-8837-41a3-89d5-46729cfc1dbe 749904 0 2022-04-29 19:21:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-29 19:21:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:16.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-751" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":214,"skipped":3654,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:16.624: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:16.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-411" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":215,"skipped":3756,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:16.770: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:21:17.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Apr 29 19:21:19.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856877, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856877, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856877, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856877, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:21:22.373: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:21:22.387: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1197-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:25.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6834" for this suite. +STEP: Destroying namespace "webhook-6834-markers" for this suite. 
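+For reference, "Registering the mutating webhook ... via the AdmissionRegistration API" above amounts to creating an object of roughly this shape. The service, group, and resource names below are placeholders, and the caBundle is a stand-in that must be replaced with the real base64-encoded CA for the webhook's serving certificate:
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: admissionregistration.k8s.io/v1
+kind: MutatingWebhookConfiguration
+metadata:
+  name: crd-mutator.example.com
+webhooks:
+- name: crd-mutator.example.com
+  clientConfig:
+    service:
+      name: sample-webhook            # hypothetical in-cluster webhook service
+      namespace: webhook-demo
+      path: /mutating-custom-resource
+    caBundle: Cg==                    # replace with the real base64 CA bundle
+  rules:
+  - apiGroups: ["webhook.example.com"]
+    apiVersions: ["v1"]
+    operations: ["CREATE"]
+    resources: ["widgets"]            # plural resource name of the CRD
+  admissionReviewVersions: ["v1", "v1beta1"]
+  sideEffects: None
+  failurePolicy: Fail
+EOF
+```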
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:9.052 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":216,"skipped":3772,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:25.824: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:27.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-7138" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":217,"skipped":3837,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:27.954: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:44.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6685" for this suite. + +• [SLOW TEST:16.406 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":346,"completed":218,"skipped":3841,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:44.360: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-map-8cc2588f-c0a9-40b5-b9bb-f58f0ce7bc7a +STEP: Creating a pod to test consume configMaps +Apr 29 19:21:44.413: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf" in namespace "configmap-317" to be "Succeeded or Failed" +Apr 29 19:21:44.417: INFO: Pod "pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821018ms +Apr 29 19:21:46.424: INFO: Pod "pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01051941s +Apr 29 19:21:48.429: INFO: Pod "pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01527349s +STEP: Saw pod success +Apr 29 19:21:48.429: INFO: Pod "pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf" satisfied condition "Succeeded or Failed" +Apr 29 19:21:48.433: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf container agnhost-container: +STEP: delete the pod +Apr 29 19:21:48.457: INFO: Waiting for pod pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf to disappear +Apr 29 19:21:48.462: INFO: Pod pod-configmaps-1ddb77ea-8e6d-472a-a420-627603958bbf no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:48.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-317" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":219,"skipped":3868,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:48.477: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-132fc109-7793-4614-a469-6572885b6171 +STEP: Creating a pod to test consume configMaps +Apr 29 19:21:48.532: INFO: Waiting up to 5m0s for pod "pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5" in namespace "configmap-7063" to be "Succeeded or Failed" +Apr 29 19:21:48.537: INFO: Pod "pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.874359ms +Apr 29 19:21:50.545: INFO: Pod "pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012705205s +Apr 29 19:21:52.552: INFO: Pod "pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020120761s +STEP: Saw pod success +Apr 29 19:21:52.552: INFO: Pod "pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5" satisfied condition "Succeeded or Failed" +Apr 29 19:21:52.557: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5 container agnhost-container: +STEP: delete the pod +Apr 29 19:21:52.578: INFO: Waiting for pod pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5 to disappear +Apr 29 19:21:52.581: INFO: Pod pod-configmaps-052d468f-0392-4213-b818-c2cab478d3a5 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:21:52.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7063" for this suite. 
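The two ConfigMap checks above mount a ConfigMap into a pod as a volume, once with an explicit key-to-path mapping and once with defaults, and assert that the container sees the expected file contents. For anyone reproducing the behavior by hand, a minimal sketch follows; every name in it is illustrative rather than one of the suite's generated names:

```console
$ kubectl create configmap demo-config --from-literal=data-1=value-1
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/config/mapped-data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
      items:                  # the "with mappings" variant remaps a key to a new path
      - key: data-1
        path: mapped-data-1
EOF
$ kubectl logs configmap-volume-demo   # once the pod completes, prints: value-1
```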
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":220,"skipped":3883,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:21:52.594: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:21:52.648: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Apr 29 19:21:57.655: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Apr 29 19:21:57.655: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Apr 29 19:21:59.662: INFO: Creating deployment "test-rollover-deployment" +Apr 29 19:21:59.677: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Apr 29 19:22:01.688: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Apr 29 19:22:01.700: INFO: Ensure that both replica sets have 1 created replica +Apr 29 19:22:01.710: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Apr 29 19:22:01.722: INFO: Updating deployment test-rollover-deployment +Apr 29 19:22:01.722: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Apr 29 19:22:03.731: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Apr 29 19:22:03.740: INFO: Make sure deployment "test-rollover-deployment" is complete +Apr 29 19:22:03.750: INFO: all replica sets need to contain the pod-template-hash label +Apr 29 19:22:03.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856921, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:22:05.760: INFO: all replica sets need to contain the pod-template-hash label +Apr 29 19:22:05.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856923, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:22:07.763: INFO: all replica sets need to contain the pod-template-hash label +Apr 29 19:22:07.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856923, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:22:09.761: INFO: all replica sets need to contain the pod-template-hash label +Apr 29 19:22:09.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856923, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:22:11.775: INFO: all replica sets need to contain the pod-template-hash label +Apr 29 19:22:11.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856923, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:22:13.762: INFO: all replica sets need to contain the pod-template-hash label +Apr 29 19:22:13.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856923, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786856919, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:22:15.765: INFO: +Apr 29 19:22:15.765: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 19:22:15.784: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-7456 7e5d2f36-5f76-4f5c-a468-b42b26219f28 750644 2 2022-04-29 19:21:59 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-29 19:22:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:22:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} 
{[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00a8e21c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-29 19:21:59 +0000 UTC,LastTransitionTime:2022-04-29 19:21:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-04-29 19:22:13 +0000 UTC,LastTransitionTime:2022-04-29 19:21:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Apr 29 19:22:15.791: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-7456 2fd45442-1c45-4af1-aad2-50a55611094e 750632 2 2022-04-29 19:22:01 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7e5d2f36-5f76-4f5c-a468-b42b26219f28 0xc0063e3110 0xc0063e3111}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:22:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e5d2f36-5f76-4f5c-a468-b42b26219f28\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:22:13 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] 
[] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0063e31a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Apr 29 19:22:15.791: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Apr 29 19:22:15.791: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7456 ae79e7b4-d15e-4b96-9f98-ce1139977216 750643 2 2022-04-29 19:21:52 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7e5d2f36-5f76-4f5c-a468-b42b26219f28 0xc0063e2ec7 0xc0063e2ec8}] [] [{e2e.test Update apps/v1 2022-04-29 19:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:22:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e5d2f36-5f76-4f5c-a468-b42b26219f28\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:22:13 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0063e2f88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Apr 29 19:22:15.791: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-7456 680ef4c2-fe4a-4e7e-9ab6-3d1b41e0c5ca 750539 2 2022-04-29 19:21:59 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7e5d2f36-5f76-4f5c-a468-b42b26219f28 0xc0063e2ff7 0xc0063e2ff8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e5d2f36-5f76-4f5c-a468-b42b26219f28\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:22:01 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0063e30a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Apr 29 19:22:15.798: INFO: Pod "test-rollover-deployment-98c5f4599-qllb5" is available: +&Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-qllb5 test-rollover-deployment-98c5f4599- deployment-7456 41e3b9fa-e2f1-4b9f-8a47-6129267db1d1 750562 0 2022-04-29 19:22:01 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 2fd45442-1c45-4af1-aad2-50a55611094e 0xc0063e36d0 0xc0063e36d1}] [] [{kube-controller-manager Update v1 2022-04-29 19:22:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2fd45442-1c45-4af1-aad2-50a55611094e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet 
Update v1 2022-04-29 19:22:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7l9tc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7l9tc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:22:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:22:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:22:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:22:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.79,StartTime:2022-04-29 19:22:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 19:22:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://5d01c4e70176c5948e4d5d8d337fa25bcc0f08c649195a211febe943fb38bd93,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:22:15.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7456" for this suite. 
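The rollover test above starts pods under a bare replication controller, adopts them with a Deployment that sets minReadySeconds: 10, swaps the image while the first rollout is still in flight, and finally checks that every old ReplicaSet is scaled to zero. A rough interactive equivalent, assuming kubectl's default naming (the container created below should be named httpd after the image):

```console
$ kubectl create deployment test-rollover --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
# trigger a rollover by changing the image before the first rollout settles
$ kubectl set image deployment/test-rollover httpd=k8s.gcr.io/e2e-test-images/agnhost:2.32
$ kubectl rollout status deployment/test-rollover
# old ReplicaSets end up with 0 replicas; only the newest keeps the pod
$ kubectl get rs -l app=test-rollover
```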
+ +• [SLOW TEST:23.229 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":221,"skipped":3892,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:22:15.824: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:22:15.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db" in namespace "projected-5117" to be "Succeeded or Failed" +Apr 29 19:22:15.907: INFO: Pod "downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db": Phase="Pending", Reason="", readiness=false. Elapsed: 9.932727ms +Apr 29 19:22:17.914: INFO: Pod "downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016125307s +STEP: Saw pod success +Apr 29 19:22:17.914: INFO: Pod "downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db" satisfied condition "Succeeded or Failed" +Apr 29 19:22:17.918: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db container client-container: +STEP: delete the pod +Apr 29 19:22:17.943: INFO: Waiting for pod downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db to disappear +Apr 29 19:22:17.948: INFO: Pod downwardapi-volume-4f8c86b8-6ab3-4f79-b47b-59143e6f14db no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:22:17.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5117" for this suite. 
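The projected downwardAPI check above exposes the pod's own name as a file through a projected volume and reads it back from inside the container. A self-contained version of that pod, with illustrative names:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
$ kubectl logs downwardapi-podname-demo   # prints the pod's own name
```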
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":222,"skipped":3911,"failed":0} +S +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:22:17.969: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:22:24.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-5088" for this suite. + +• [SLOW TEST:6.097 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":223,"skipped":3912,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:22:24.068: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-map-210c9c6a-c5df-4c0f-a8d3-e4ffe712998c +STEP: Creating a pod to test consume secrets +Apr 29 19:22:24.130: INFO: Waiting up to 5m0s for pod "pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c" in namespace "secrets-9053" to be "Succeeded or Failed" +Apr 29 19:22:24.137: INFO: Pod "pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.187334ms +Apr 29 19:22:26.143: INFO: Pod "pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012867562s +Apr 29 19:22:28.149: INFO: Pod "pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018435565s +STEP: Saw pod success +Apr 29 19:22:28.149: INFO: Pod "pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c" satisfied condition "Succeeded or Failed" +Apr 29 19:22:28.153: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c container secret-volume-test: +STEP: delete the pod +Apr 29 19:22:28.173: INFO: Waiting for pod pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c to disappear +Apr 29 19:22:28.177: INFO: Pod pod-secrets-2f0322fb-209f-4086-ab6e-b94de147910c no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:22:28.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9053" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":224,"skipped":3948,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:22:28.194: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
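The lifecycle-hook test beginning here first runs a handler pod (pod-handle-http-request), then creates a second pod whose postStart httpGet hook calls back into that handler before both are torn down. The shape of such a pod, sketched with a placeholder handler address since the suite targets its handler pod's real IP:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-hook-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.0.0.10        # placeholder; use the handler pod's IP
          path: /echo?msg=poststart
          port: 8080
EOF
```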
+Apr 29 19:22:28.260: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:22:30.270: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Apr 29 19:22:30.287: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:22:32.294: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Apr 29 19:22:32.317: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Apr 29 19:22:32.324: INFO: Pod pod-with-poststart-http-hook still exists +Apr 29 19:22:34.324: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Apr 29 19:22:34.330: INFO: Pod pod-with-poststart-http-hook still exists +Apr 29 19:22:36.325: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Apr 29 19:22:36.330: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:22:36.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-360" for this suite. + +• [SLOW TEST:8.150 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":225,"skipped":4001,"failed":0} +SS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:22:36.344: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name projected-secret-test-ccd39ecd-95a4-4682-9c93-daea2f9443b4 +STEP: Creating a pod to test consume secrets +Apr 29 19:22:36.405: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202" in namespace "projected-9568" to be "Succeeded or Failed" +Apr 29 19:22:36.423: INFO: Pod "pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202": Phase="Pending", Reason="", readiness=false. Elapsed: 18.667041ms +Apr 29 19:22:38.429: INFO: Pod "pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024354677s +STEP: Saw pod success +Apr 29 19:22:38.429: INFO: Pod "pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202" satisfied condition "Succeeded or Failed" +Apr 29 19:22:38.433: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202 container projected-secret-volume-test: +STEP: delete the pod +Apr 29 19:22:38.454: INFO: Waiting for pod pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202 to disappear +Apr 29 19:22:38.457: INFO: Pod pod-projected-secrets-149c2871-9443-45f4-b4a2-f3db97f0a202 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:22:38.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9568" for this suite. +•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":226,"skipped":4003,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:22:38.474: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Apr 29 19:22:38.533: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751074 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:22:38.533: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751074 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Apr 29 19:22:48.546: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751187 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:22:48.546: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751187 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Apr 29 19:22:58.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751263 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:22:58.559: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751263 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Apr 29 19:23:08.567: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751339 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:23:08.567: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-424 f520de4f-eb25-4e02-8a63-70914c56a0aa 751339 0 2022-04-29 19:22:38 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-29 19:22:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Apr 29 19:23:18.580: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-424 cf1531ac-9613-4118-8a6f-aebc402b865c 751406 0 2022-04-29 19:23:18 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 19:23:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:23:18.580: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-424 cf1531ac-9613-4118-8a6f-aebc402b865c 751406 0 2022-04-29 19:23:18 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 19:23:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Apr 29 19:23:28.589: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-424 cf1531ac-9613-4118-8a6f-aebc402b865c 751475 0 2022-04-29 19:23:18 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 19:23:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Apr 29 19:23:28.589: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-424 cf1531ac-9613-4118-8a6f-aebc402b865c 751475 0 2022-04-29 19:23:18 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-29 19:23:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:23:38.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-424" for this suite. 
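The watch test above registers three watchers (label A, label B, and A-or-B) and drives ADDED, MODIFIED, and DELETED notifications by mutating configmaps; each event is logged twice because two of the watchers match it. The same notification traffic can be produced interactively, reusing the label value from the test; the two-terminal split below is this sketch's assumption:

```console
# terminal 1: stream configmaps that carry label A
$ kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch

# terminal 2: generate the notifications
$ kubectl create configmap e2e-watch-demo
# labeling brings the object into the watch's selector, observed as ADDED
$ kubectl label configmap e2e-watch-demo watch-this-configmap=multiple-watchers-A
$ kubectl patch configmap e2e-watch-demo --type merge -p '{"data":{"mutation":"1"}}'
$ kubectl delete configmap e2e-watch-demo
```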
+ +• [SLOW TEST:60.132 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":227,"skipped":4019,"failed":0} +SSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:23:38.607: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service endpoint-test2 in namespace services-5145 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5145 to expose endpoints map[] +Apr 29 19:23:38.673: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found +Apr 29 19:23:39.692: INFO: successfully validated that service endpoint-test2 in namespace services-5145 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-5145 +Apr 29 19:23:39.706: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:23:41.727: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5145 to expose endpoints map[pod1:[80]] +Apr 29 19:23:41.749: INFO: successfully validated that service endpoint-test2 in namespace services-5145 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Apr 29 19:23:41.749: INFO: Creating new exec pod +Apr 29 19:23:44.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-5145 exec execpodd9zmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Apr 29 19:23:44.979: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:44.979: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:23:44.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-5145 exec execpodd9zmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.191.157 80' +Apr 29 19:23:45.180: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.66.191.157 80\nConnection to 100.66.191.157 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:45.180: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: 
close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-5145 +Apr 29 19:23:45.192: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:23:47.198: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:23:49.208: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5145 to expose endpoints map[pod1:[80] pod2:[80]] +Apr 29 19:23:49.234: INFO: successfully validated that service endpoint-test2 in namespace services-5145 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Apr 29 19:23:50.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-5145 exec execpodd9zmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Apr 29 19:23:50.424: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2 80\n+ echo hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:50.424: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:23:50.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-5145 exec execpodd9zmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.191.157 80' +Apr 29 19:23:50.603: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.66.191.157 80\nConnection to 100.66.191.157 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:50.603: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-5145 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5145 to expose endpoints map[pod2:[80]] +Apr 29 19:23:50.639: INFO: successfully validated that service endpoint-test2 in namespace services-5145 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Apr 29 19:23:51.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-5145 exec execpodd9zmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Apr 29 19:23:51.838: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:51.838: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:23:51.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-5145 exec execpodd9zmp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.191.157 80' +Apr 29 19:23:52.024: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.66.191.157 80\nConnection to 100.66.191.157 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:52.024: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-5145 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5145 to expose endpoints map[] +Apr 29 19:23:53.054: INFO: successfully validated that service endpoint-test2 in namespace services-5145 exposes endpoints map[] +[AfterEach] [sig-network] Services + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:23:53.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5145" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:14.484 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":228,"skipped":4023,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:23:53.092: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-2037 +STEP: creating service affinity-nodeport in namespace services-2037 +STEP: creating replication controller affinity-nodeport in namespace services-2037 +I0429 19:23:53.171470 25 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2037, replica count: 3 +I0429 19:23:56.222522 25 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:23:56.239: INFO: Creating new exec pod +Apr 29 19:23:59.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-2037 exec execpod-affinitytkdsw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Apr 29 19:23:59.472: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:59.472: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:23:59.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-2037 exec execpod-affinitytkdsw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.178.246 80' +Apr 29 19:23:59.664: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.66.178.246 80\nConnection to 100.66.178.246 80 port [tcp/http] succeeded!\n" +Apr 29 19:23:59.664: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:23:59.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-2037 exec execpod-affinitytkdsw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 31449' +Apr 29 19:23:59.858: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.111.35 31449\nConnection to 10.180.111.35 31449 port [tcp/*] succeeded!\n" +Apr 29 19:23:59.858: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:23:59.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-2037 exec execpod-affinitytkdsw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.99.66 31449' +Apr 29 19:24:00.062: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.99.66 31449\nConnection to 10.180.99.66 31449 port [tcp/*] succeeded!\n" +Apr 29 19:24:00.062: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:24:00.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-2037 exec execpod-affinitytkdsw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.180.111.35:31449/ ; done' +Apr 29 19:24:00.395: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:31449/\n" +Apr 29 19:24:00.396: INFO: stdout: "\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg\naffinity-nodeport-mj9cg" +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 
19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Received response from host: affinity-nodeport-mj9cg +Apr 29 19:24:00.396: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-2037, will wait for the garbage collector to delete the pods +Apr 29 19:24:00.472: INFO: Deleting ReplicationController affinity-nodeport took: 11.019005ms +Apr 29 19:24:00.573: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.301181ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:24:02.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2037" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.532 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":229,"skipped":4068,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:24:02.624: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Apr 29 19:24:04.689: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7558 PodName:var-expansion-51d3bfe2-a4dc-43a2-a9f5-2451db8a1a9e ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:24:04.689: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: test for file in mounted path +Apr 29 19:24:04.800: INFO: ExecWithOptions {Command:[/bin/sh -c 
test -f /subpath_mount/test.log] Namespace:var-expansion-7558 PodName:var-expansion-51d3bfe2-a4dc-43a2-a9f5-2451db8a1a9e ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:24:04.800: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: updating the annotation value +Apr 29 19:24:05.411: INFO: Successfully updated pod "var-expansion-51d3bfe2-a4dc-43a2-a9f5-2451db8a1a9e" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Apr 29 19:24:05.416: INFO: Deleting pod "var-expansion-51d3bfe2-a4dc-43a2-a9f5-2451db8a1a9e" in namespace "var-expansion-7558" +Apr 29 19:24:05.421: INFO: Wait up to 5m0s for pod "var-expansion-51d3bfe2-a4dc-43a2-a9f5-2451db8a1a9e" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:24:39.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-7558" for this suite. + +• [SLOW TEST:36.823 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":230,"skipped":4078,"failed":0} +S +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:24:39.447: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Apr 29 19:24:39.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9577 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Apr 29 19:24:39.595: INFO: stderr: "" +Apr 29 19:24:39.595: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Apr 29 19:24:44.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9577 get pod e2e-test-httpd-pod -o json' +Apr 29 19:24:44.717: INFO: stderr: "" +Apr 29 19:24:44.717: INFO: stdout: "{\n 
\"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-04-29T19:24:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9577\",\n \"resourceVersion\": \"752202\",\n \"uid\": \"66aa3ed8-da21-45ab-bd33-eefcda112e48\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-t7pzl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"tkg-mgmt-vc-md-0-59d8b7c778-msxpc\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-t7pzl\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T19:24:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T19:24:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T19:24:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-29T19:24:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c5060966dd345c94ccb4699f059540715a0dca63d44c04073294544f079d0aaa\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-29T19:24:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.180.99.66\",\n \"phase\": \"Running\",\n \"podIP\": \"100.96.1.95\",\n \"podIPs\": [\n {\n \"ip\": \"100.96.1.95\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-04-29T19:24:39Z\"\n }\n}\n" +STEP: replace the image in the pod +Apr 29 19:24:44.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 
--namespace=kubectl-9577 replace -f -' +Apr 29 19:24:47.718: INFO: stderr: "" +Apr 29 19:24:47.718: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 +Apr 29 19:24:47.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9577 delete pods e2e-test-httpd-pod' +Apr 29 19:24:58.646: INFO: stderr: "" +Apr 29 19:24:58.646: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:24:58.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9577" for this suite. + +• [SLOW TEST:19.215 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":231,"skipped":4079,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:24:58.663: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:24:58.713: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d8fb6dee-f740-4abc-b29f-a92019aba2d0" in namespace "security-context-test-6408" to be "Succeeded or Failed" +Apr 29 19:24:58.722: INFO: Pod "alpine-nnp-false-d8fb6dee-f740-4abc-b29f-a92019aba2d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263334ms +Apr 29 19:25:00.728: INFO: Pod "alpine-nnp-false-d8fb6dee-f740-4abc-b29f-a92019aba2d0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.014214356s +Apr 29 19:25:00.728: INFO: Pod "alpine-nnp-false-d8fb6dee-f740-4abc-b29f-a92019aba2d0" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:25:00.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6408" for this suite. +•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":232,"skipped":4102,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:25:00.762: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0644 on node default medium +Apr 29 19:25:00.829: INFO: Waiting up to 5m0s for pod "pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0" in namespace "emptydir-4890" to be "Succeeded or Failed" +Apr 29 19:25:00.833: INFO: Pod "pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516709ms +Apr 29 19:25:02.839: INFO: Pod "pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009295108s +STEP: Saw pod success +Apr 29 19:25:02.839: INFO: Pod "pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0" satisfied condition "Succeeded or Failed" +Apr 29 19:25:02.842: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0 container test-container: +STEP: delete the pod +Apr 29 19:25:02.860: INFO: Waiting for pod pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0 to disappear +Apr 29 19:25:02.864: INFO: Pod pod-cb43c738-770d-40f1-bbbd-e38b5fdde5c0 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:25:02.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4890" for this suite. 
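The EmptyDir test above writes a 0644 file onto the node's default medium and verifies its mode and content from inside the pod. A minimal hand-run sketch of the same idea, with illustrative pod and volume names (the suite generates its own):

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    # Write a file, force mode 0644, then show its mode and content.
    command: ["sh", "-c", "echo -n mount-tester > /ed/file && chmod 0644 /ed/file && ls -l /ed/file && cat /ed/file"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}
EOF
$ kubectl logs emptydir-0644-demo
```

Once the pod reaches Succeeded, the log should show `-rw-r--r--` for the file, matching the `(root,0644,default)` case above.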
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":233,"skipped":4161,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:25:02.880: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:31:00.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1024" for this suite. + +• [SLOW TEST:358.143 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":234,"skipped":4186,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:31:01.028: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:31:01.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21" in namespace "projected-6988" to be "Succeeded or Failed" +Apr 29 19:31:01.122: INFO: 
Pod "downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21": Phase="Pending", Reason="", readiness=false. Elapsed: 7.447462ms +Apr 29 19:31:03.553: INFO: Pod "downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21": Phase="Running", Reason="", readiness=true. Elapsed: 2.438608735s +Apr 29 19:31:05.560: INFO: Pod "downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.445915259s +STEP: Saw pod success +Apr 29 19:31:05.561: INFO: Pod "downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21" satisfied condition "Succeeded or Failed" +Apr 29 19:31:05.565: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21 container client-container: +STEP: delete the pod +Apr 29 19:31:07.009: INFO: Waiting for pod downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21 to disappear +Apr 29 19:31:07.017: INFO: Pod downwardapi-volume-8611c2b5-af30-4d5c-8852-3a48a6b47f21 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:31:07.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6988" for this suite. + +• [SLOW TEST:6.003 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":235,"skipped":4234,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:31:07.031: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:31:35.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3976" for this suite. 
+ +• [SLOW TEST:28.104 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":236,"skipped":4242,"failed":0} +SSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:31:35.136: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Apr 29 19:31:35.192: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:31:37.199: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:31:39.200: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Apr 29 19:31:39.216: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:31:41.237: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:31:43.269: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Apr 29 19:31:43.275: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.275: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:43.419: INFO: Exec stderr: "" +Apr 29 19:31:43.419: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.419: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:43.520: INFO: Exec stderr: "" +Apr 29 19:31:43.520: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.520: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:43.643: INFO: Exec stderr: "" +Apr 29 19:31:43.643: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-747 
PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.643: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:43.747: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Apr 29 19:31:43.747: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.747: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:43.847: INFO: Exec stderr: "" +Apr 29 19:31:43.847: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.847: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:43.946: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Apr 29 19:31:43.946: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:43.946: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:44.065: INFO: Exec stderr: "" +Apr 29 19:31:44.065: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:44.065: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:44.171: INFO: Exec stderr: "" +Apr 29 19:31:44.171: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:44.171: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:44.294: INFO: Exec stderr: "" +Apr 29 19:31:44.294: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-747 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:31:44.294: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:31:44.406: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:31:44.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-747" for this suite. 
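The /etc/hosts checks above reduce to one observable difference: for a hostNetwork=false pod the kubelet writes the file itself, while hostNetwork=true pods (and containers that mount their own /etc/hosts) see an unmanaged copy. A quick way to confirm the managed case by hand, with an illustrative pod name; the banner line is what current kubelets write, so treat the exact text as an assumption:

```console
$ kubectl run hosts-demo --image=k8s.gcr.io/e2e-test-images/busybox:1.29-1 \
    --restart=Never -- sleep 3600
$ kubectl exec hosts-demo -- head -n 1 /etc/hosts
# Kubernetes-managed hosts file.
$ kubectl delete pod hosts-demo
```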
+ +• [SLOW TEST:9.283 seconds] +[sig-node] KubeletManagedEtcHosts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":237,"skipped":4245,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:31:44.420: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Apr 29 19:31:44.464: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the sample API server. +Apr 29 19:31:44.900: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created +Apr 29 19:31:46.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857504, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857504, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857504, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857504, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:31:49.335: INFO: Waited 339.008009ms for the sample-apiserver to be ready to handle requests. 
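Registration in this test hinges on an APIService object that routes the wardle.example.com group to the sample server; the patch and list steps that follow exercise it. The same checks can be replayed by hand against the group shown in the log:

```console
$ kubectl get apiservice v1alpha1.wardle.example.com
$ kubectl patch apiservice v1alpha1.wardle.example.com \
    -p '{"spec":{"versionPriority": 400}}'
$ kubectl get apiservices | grep wardle
```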
+I0429 19:31:50.407284 25 request.go:665] Waited for 1.017022643s due to client-side throttling, not priority and fairness, request: GET:https://100.64.0.1:443/apis/cluster.x-k8s.io/v1alpha3?timeout=32s +STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Apr 29 19:31:51.573: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:31:52.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-1285" for this suite. + +• [SLOW TEST:8.044 seconds] +[sig-api-machinery] Aggregator +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":238,"skipped":4261,"failed":0} +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:31:52.464: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Apr 29 19:31:52.516: INFO: The status of Pod annotationupdate917b0f92-bff8-49f1-a8a0-b15fef02d049 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:31:54.526: INFO: The status of Pod annotationupdate917b0f92-bff8-49f1-a8a0-b15fef02d049 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:31:56.523: INFO: The status of Pod annotationupdate917b0f92-bff8-49f1-a8a0-b15fef02d049 is Running (Ready = true) +Apr 29 19:31:57.054: INFO: Successfully updated pod "annotationupdate917b0f92-bff8-49f1-a8a0-b15fef02d049" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:31:59.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3647" for this suite. 
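The downward API volume test above projects pod metadata into a file and depends on the kubelet refreshing that file after the annotation is updated. A minimal sketch of the same shape, with illustrative names and annotation values; the file is rewritten on the kubelet's sync loop, so allow a short delay before re-reading:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    # Keep printing the projected annotations so updates are visible.
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
$ kubectl annotate pod annotationupdate-demo build=two --overwrite
$ kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
```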
+ +• [SLOW TEST:6.622 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4261,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:31:59.087: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-ff076a96-daf4-46ae-be5a-10a1f717d9b2 +STEP: Creating a pod to test consume configMaps +Apr 29 19:31:59.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848" in namespace "projected-7629" to be "Succeeded or Failed" +Apr 29 19:31:59.152: INFO: Pod "pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848": Phase="Pending", Reason="", readiness=false. Elapsed: 4.915142ms +Apr 29 19:32:01.159: INFO: Pod "pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011139084s +STEP: Saw pod success +Apr 29 19:32:01.159: INFO: Pod "pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848" satisfied condition "Succeeded or Failed" +Apr 29 19:32:01.164: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848 container agnhost-container: +STEP: delete the pod +Apr 29 19:32:01.194: INFO: Waiting for pod pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848 to disappear +Apr 29 19:32:01.199: INFO: Pod pod-projected-configmaps-19eac4b7-a598-49ac-8b95-e03c8cbf9848 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:32:01.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7629" for this suite. 
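The projected ConfigMap test reads a mounted key while the container runs as a non-root user. A minimal reproduction under assumed names (the suite's generated ConfigMap and pod names differ):

```console
$ kubectl create configmap projected-demo-cm --from-literal=data-1=value-1
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["cat", "/etc/projected-volume/data-1"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected-volume
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
EOF
$ kubectl logs projected-cm-demo
```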
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":240,"skipped":4273,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:32:01.210: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Apr 29 19:32:01.252: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:32:10.586: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:32:40.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4048" for this suite. + +• [SLOW TEST:38.946 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":241,"skipped":4274,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:32:40.158: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-6980c867-ceb8-4e48-a523-99a9adb6997f +STEP: Creating a pod to test consume configMaps +Apr 29 19:32:40.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd" in namespace "projected-9653" to be "Succeeded 
or Failed" +Apr 29 19:32:40.256: INFO: Pod "pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.881561ms +Apr 29 19:32:42.262: INFO: Pod "pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010710571s +STEP: Saw pod success +Apr 29 19:32:42.262: INFO: Pod "pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd" satisfied condition "Succeeded or Failed" +Apr 29 19:32:42.267: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd container agnhost-container: +STEP: delete the pod +Apr 29 19:32:42.293: INFO: Waiting for pod pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd to disappear +Apr 29 19:32:42.302: INFO: Pod pod-projected-configmaps-3abaa34d-3072-437b-950e-15c0fe7be5fd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:32:42.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9653" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":242,"skipped":4302,"failed":0} +SSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:32:42.320: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-8135 +STEP: creating service affinity-clusterip-transition in namespace services-8135 +STEP: creating replication controller affinity-clusterip-transition in namespace services-8135 +I0429 19:32:42.385982 25 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8135, replica count: 3 +I0429 19:32:45.437315 25 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:32:45.448: INFO: Creating new exec pod +Apr 29 19:32:48.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-8135 exec execpod-affinityrwxgp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Apr 29 19:32:49.393: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Apr 29 19:32:49.393: INFO: stdout: 
"HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:32:49.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-8135 exec execpod-affinityrwxgp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.66.87.194 80' +Apr 29 19:32:49.620: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.66.87.194 80\nConnection to 100.66.87.194 80 port [tcp/http] succeeded!\n" +Apr 29 19:32:49.620: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:32:49.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-8135 exec execpod-affinityrwxgp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.66.87.194:80/ ; done' +Apr 29 19:32:49.927: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n" +Apr 29 19:32:49.927: INFO: stdout: "\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-ncnqj" +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: 
affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:49.927: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:49.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-8135 exec execpod-affinityrwxgp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.66.87.194:80/ ; done' +Apr 29 19:32:50.278: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n" +Apr 29 19:32:50.278: INFO: stdout: "\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-ncnqj\naffinity-clusterip-transition-szwsw\naffinity-clusterip-transition-szwsw" +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 
19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-ncnqj +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:32:50.278: INFO: Received response from host: affinity-clusterip-transition-szwsw +Apr 29 19:33:20.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-8135 exec execpod-affinityrwxgp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://100.66.87.194:80/ ; done' +Apr 29 19:33:20.543: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://100.66.87.194:80/\n" +Apr 29 19:33:20.543: INFO: stdout: "\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl\naffinity-clusterip-transition-nb2nl" +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: 
affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Received response from host: affinity-clusterip-transition-nb2nl +Apr 29 19:33:20.543: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8135, will wait for the garbage collector to delete the pods +Apr 29 19:33:20.620: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.553058ms +Apr 29 19:33:20.721: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.93316ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:33:23.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8135" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:40.748 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":243,"skipped":4305,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:33:23.068: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:01.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-7753" for this suite. 
+ +• [SLOW TEST:98.114 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":244,"skipped":4313,"failed":0} +S +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:01.182: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Apr 29 19:35:02.384: INFO: The status of Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf is Running (Ready = true) +Apr 29 19:35:02.735: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:02.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1950" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":245,"skipped":4314,"failed":0} +SSSS +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:02.754: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Apr 29 19:35:02.816: INFO: Waiting up to 5m0s for pod "security-context-08b0380e-414e-48ad-89c4-e85e15181171" in namespace "security-context-4391" to be "Succeeded or Failed" +Apr 29 19:35:02.821: INFO: Pod "security-context-08b0380e-414e-48ad-89c4-e85e15181171": Phase="Pending", Reason="", readiness=false. Elapsed: 4.573274ms +Apr 29 19:35:04.828: INFO: Pod "security-context-08b0380e-414e-48ad-89c4-e85e15181171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012373269s +STEP: Saw pod success +Apr 29 19:35:04.828: INFO: Pod "security-context-08b0380e-414e-48ad-89c4-e85e15181171" satisfied condition "Succeeded or Failed" +Apr 29 19:35:04.835: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod security-context-08b0380e-414e-48ad-89c4-e85e15181171 container test-container: +STEP: delete the pod +Apr 29 19:35:04.871: INFO: Waiting for pod security-context-08b0380e-414e-48ad-89c4-e85e15181171 to disappear +Apr 29 19:35:04.877: INFO: Pod security-context-08b0380e-414e-48ad-89c4-e85e15181171 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:04.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-4391" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":246,"skipped":4318,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:04.889: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-b8f2da20-7428-4f2d-822b-36c6dde3bdc6 +STEP: Creating a pod to test consume secrets +Apr 29 19:35:04.954: INFO: Waiting up to 5m0s for pod "pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3" in namespace "secrets-504" to be "Succeeded or Failed" +Apr 29 19:35:04.958: INFO: Pod "pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465836ms +Apr 29 19:35:06.965: INFO: Pod "pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3": Phase="Running", Reason="", readiness=true. Elapsed: 2.010961234s +Apr 29 19:35:08.972: INFO: Pod "pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018109124s +STEP: Saw pod success +Apr 29 19:35:08.972: INFO: Pod "pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3" satisfied condition "Succeeded or Failed" +Apr 29 19:35:08.976: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3 container secret-env-test: +STEP: delete the pod +Apr 29 19:35:08.995: INFO: Waiting for pod pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3 to disappear +Apr 29 19:35:08.999: INFO: Pod pod-secrets-4feb4564-49e6-4087-ab07-b05270824ab3 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:08.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-504" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":247,"skipped":4326,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:09.012: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name projected-secret-test-0201230d-dc83-4db0-beb6-7947d3f3becf +STEP: Creating a pod to test consume secrets +Apr 29 19:35:09.063: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0" in namespace "projected-5527" to be "Succeeded or Failed" +Apr 29 19:35:09.069: INFO: Pod "pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111671ms +Apr 29 19:35:11.078: INFO: Pod "pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0": Phase="Running", Reason="", readiness=true. Elapsed: 2.015034026s +Apr 29 19:35:13.085: INFO: Pod "pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021899895s +STEP: Saw pod success +Apr 29 19:35:13.085: INFO: Pod "pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0" satisfied condition "Succeeded or Failed" +Apr 29 19:35:13.089: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0 container secret-volume-test: +STEP: delete the pod +Apr 29 19:35:13.134: INFO: Waiting for pod pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0 to disappear +Apr 29 19:35:13.139: INFO: Pod pod-projected-secrets-0e5ac635-33d2-49d2-9aca-6d1ae27daaf0 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:13.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5527" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":248,"skipped":4338,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:13.153: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:13.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-312" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":249,"skipped":4353,"failed":0} + +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:13.261: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: mirroring a new custom Endpoint +Apr 29 19:35:13.342: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +Apr 29 19:35:15.360: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 +STEP: mirroring deletion of a custom Endpoint +Apr 29 19:35:17.374: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:19.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-1484" for this suite. 
+ +• [SLOW TEST:6.133 seconds] +[sig-network] EndpointSliceMirroring +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":250,"skipped":4353,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:19.395: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-8a408ac9-1791-495c-8da9-b609d1446762 +STEP: Creating a pod to test consume secrets +Apr 29 19:35:19.446: INFO: Waiting up to 5m0s for pod "pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19" in namespace "secrets-793" to be "Succeeded or Failed" +Apr 29 19:35:19.453: INFO: Pod "pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.940596ms +Apr 29 19:35:21.461: INFO: Pod "pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014325256s +STEP: Saw pod success +Apr 29 19:35:21.461: INFO: Pod "pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19" satisfied condition "Succeeded or Failed" +Apr 29 19:35:21.465: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19 container secret-volume-test: +STEP: delete the pod +Apr 29 19:35:21.486: INFO: Waiting for pod pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19 to disappear +Apr 29 19:35:21.489: INFO: Pod pod-secrets-67a3d99c-d18d-42f9-b621-477e97508a19 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:21.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-793" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":251,"skipped":4361,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:21.501: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:35:21.553: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b" in namespace "downward-api-664" to be "Succeeded or Failed" +Apr 29 19:35:21.556: INFO: Pod "downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.222227ms +Apr 29 19:35:23.563: INFO: Pod "downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009712158s +STEP: Saw pod success +Apr 29 19:35:23.563: INFO: Pod "downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b" satisfied condition "Succeeded or Failed" +Apr 29 19:35:23.568: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b container client-container: +STEP: delete the pod +Apr 29 19:35:23.590: INFO: Waiting for pod downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b to disappear +Apr 29 19:35:23.593: INFO: Pod downwardapi-volume-bc018427-b51d-49a7-b62a-892174d0326b no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:23.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-664" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":4402,"failed":0} + +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:23.604: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:46.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-755" for this suite. 
+ +• [SLOW TEST:22.843 seconds] +[sig-node] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 + when starting a container that exits + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":253,"skipped":4402,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:46.450: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:35:46.509: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d5503114-cc31-4a03-a151-a724b655b686" in namespace "security-context-test-5702" to be "Succeeded or Failed" +Apr 29 19:35:46.514: INFO: Pod "busybox-readonly-false-d5503114-cc31-4a03-a151-a724b655b686": Phase="Pending", Reason="", readiness=false. Elapsed: 4.891589ms +Apr 29 19:35:48.520: INFO: Pod "busybox-readonly-false-d5503114-cc31-4a03-a151-a724b655b686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010402922s +Apr 29 19:35:48.520: INFO: Pod "busybox-readonly-false-d5503114-cc31-4a03-a151-a724b655b686" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:48.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-5702" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":254,"skipped":4412,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:48.533: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Apr 29 19:35:48.569: INFO: namespace kubectl-9191 +Apr 29 19:35:48.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9191 create -f -' +Apr 29 19:35:50.727: INFO: stderr: "" +Apr 29 19:35:50.727: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Apr 29 19:35:51.732: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:35:51.732: INFO: Found 0 / 1 +Apr 29 19:35:52.734: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:35:52.734: INFO: Found 1 / 1 +Apr 29 19:35:52.734: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Apr 29 19:35:52.739: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:35:52.739: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Apr 29 19:35:52.739: INFO: wait on agnhost-primary startup in kubectl-9191 +Apr 29 19:35:52.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9191 logs agnhost-primary-pzfb9 agnhost-primary' +Apr 29 19:35:52.824: INFO: stderr: "" +Apr 29 19:35:52.824: INFO: stdout: "Paused\n" +STEP: exposing RC +Apr 29 19:35:52.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9191 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Apr 29 19:35:52.921: INFO: stderr: "" +Apr 29 19:35:52.921: INFO: stdout: "service/rm2 exposed\n" +Apr 29 19:35:52.928: INFO: Service rm2 in namespace kubectl-9191 found. +STEP: exposing service +Apr 29 19:35:54.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9191 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Apr 29 19:35:55.041: INFO: stderr: "" +Apr 29 19:35:55.041: INFO: stdout: "service/rm3 exposed\n" +Apr 29 19:35:55.044: INFO: Service rm3 in namespace kubectl-9191 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:57.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9191" for this suite. 
+ +• [SLOW TEST:8.536 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl expose + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233 + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":255,"skipped":4452,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:57.070: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:35:57.109: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:35:58.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-2677" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":256,"skipped":4458,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:35:58.163: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
+Apr 29 19:35:58.221: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-867 1d477cfb-979f-4624-a3fc-2b8de971cf3c 757811 0 2022-04-29 19:35:58 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2022-04-29 19:35:58 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dgqmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgqmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Tole
ration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Apr 29 19:35:58.230: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:36:00.238: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Apr 29 19:36:00.239: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-867 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:36:00.239: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Verifying customized DNS server is configured on pod... +Apr 29 19:36:00.365: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-867 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:36:00.366: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:36:00.481: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:00.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-867" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":257,"skipped":4477,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:00.518: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-841 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Apr 29 19:36:00.562: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Apr 29 19:36:00.593: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:36:02.600: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:36:04.598: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:06.599: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:08.598: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:10.599: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:12.599: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:14.601: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:16.598: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:18.600: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:20.600: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:36:22.599: INFO: The status of Pod netserver-0 is Running (Ready = true) +Apr 29 19:36:22.608: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Apr 29 19:36:24.657: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Apr 29 19:36:24.657: INFO: Going to poll 100.96.0.152 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Apr 29 19:36:24.661: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.0.152:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-841 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:36:24.661: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:36:24.771: INFO: Found all 1 expected endpoints: [netserver-0] +Apr 29 19:36:24.771: INFO: Going to poll 100.96.1.122 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Apr 29 19:36:24.776: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://100.96.1.122:8083/hostName | grep -v '^\s*$'] 
Namespace:pod-network-test-841 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:36:24.776: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:36:24.882: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:24.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-841" for this suite. + +• [SLOW TEST:24.375 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":258,"skipped":4498,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:24.894: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:36:24.930: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-8870 +I0429 19:36:24.946646 25 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8870, replica count: 1 +I0429 19:36:25.997888 25 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0429 19:36:26.999001 25 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:36:27.120: INFO: Created: latency-svc-7tj5q +Apr 29 19:36:27.126: INFO: Got endpoints: latency-svc-7tj5q [26.634949ms] +Apr 29 19:36:27.142: INFO: Created: latency-svc-qb659 +Apr 29 19:36:27.154: INFO: Created: latency-svc-9bnvm +Apr 29 19:36:27.157: INFO: Got endpoints: latency-svc-qb659 [30.191302ms] +Apr 29 19:36:27.161: INFO: Got endpoints: latency-svc-9bnvm [33.728462ms] +Apr 29 19:36:27.163: INFO: Created: latency-svc-mpvg7 +Apr 29 19:36:27.165: INFO: Got endpoints: latency-svc-mpvg7 [38.322119ms] +Apr 29 19:36:27.176: INFO: Created: latency-svc-4m4dw +Apr 29 19:36:27.181: INFO: Got endpoints: latency-svc-4m4dw [53.84713ms] +Apr 29 
19:36:27.183: INFO: Created: latency-svc-gkmzh +Apr 29 19:36:27.187: INFO: Got endpoints: latency-svc-gkmzh [59.95053ms] +Apr 29 19:36:27.210: INFO: Created: latency-svc-xjs27 +Apr 29 19:36:27.216: INFO: Got endpoints: latency-svc-xjs27 [89.037172ms] +Apr 29 19:36:27.216: INFO: Created: latency-svc-54ksj +Apr 29 19:36:27.223: INFO: Got endpoints: latency-svc-54ksj [95.834144ms] +Apr 29 19:36:27.230: INFO: Created: latency-svc-bfrg5 +Apr 29 19:36:27.243: INFO: Got endpoints: latency-svc-bfrg5 [116.387131ms] +Apr 29 19:36:27.257: INFO: Created: latency-svc-skdzz +Apr 29 19:36:27.262: INFO: Got endpoints: latency-svc-skdzz [135.016952ms] +Apr 29 19:36:27.273: INFO: Created: latency-svc-9m8lw +Apr 29 19:36:27.285: INFO: Created: latency-svc-l5mpd +Apr 29 19:36:27.286: INFO: Got endpoints: latency-svc-9m8lw [158.573578ms] +Apr 29 19:36:27.293: INFO: Got endpoints: latency-svc-l5mpd [166.132683ms] +Apr 29 19:36:27.295: INFO: Created: latency-svc-t4mns +Apr 29 19:36:27.302: INFO: Got endpoints: latency-svc-t4mns [175.481878ms] +Apr 29 19:36:27.303: INFO: Created: latency-svc-vtrj7 +Apr 29 19:36:27.310: INFO: Got endpoints: latency-svc-vtrj7 [182.605383ms] +Apr 29 19:36:27.338: INFO: Created: latency-svc-r7ngt +Apr 29 19:36:27.345: INFO: Created: latency-svc-cthtk +Apr 29 19:36:27.346: INFO: Got endpoints: latency-svc-r7ngt [218.627194ms] +Apr 29 19:36:27.361: INFO: Got endpoints: latency-svc-cthtk [234.470581ms] +Apr 29 19:36:27.366: INFO: Created: latency-svc-sjm94 +Apr 29 19:36:27.370: INFO: Got endpoints: latency-svc-sjm94 [213.222099ms] +Apr 29 19:36:27.371: INFO: Created: latency-svc-s42lv +Apr 29 19:36:27.383: INFO: Got endpoints: latency-svc-s42lv [221.358522ms] +Apr 29 19:36:27.383: INFO: Created: latency-svc-2xfs7 +Apr 29 19:36:27.393: INFO: Got endpoints: latency-svc-2xfs7 [227.754801ms] +Apr 29 19:36:27.402: INFO: Created: latency-svc-gmjnh +Apr 29 19:36:27.410: INFO: Created: latency-svc-kf5rl +Apr 29 19:36:27.410: INFO: Got endpoints: latency-svc-gmjnh [229.128431ms] +Apr 29 19:36:27.419: INFO: Created: latency-svc-m49zr +Apr 29 19:36:27.421: INFO: Got endpoints: latency-svc-kf5rl [233.703192ms] +Apr 29 19:36:27.428: INFO: Got endpoints: latency-svc-m49zr [211.6584ms] +Apr 29 19:36:27.430: INFO: Created: latency-svc-6s2bs +Apr 29 19:36:27.444: INFO: Got endpoints: latency-svc-6s2bs [220.399032ms] +Apr 29 19:36:27.448: INFO: Created: latency-svc-4hgdz +Apr 29 19:36:27.459: INFO: Got endpoints: latency-svc-4hgdz [215.840349ms] +Apr 29 19:36:27.463: INFO: Created: latency-svc-f5dlw +Apr 29 19:36:27.469: INFO: Got endpoints: latency-svc-f5dlw [207.1623ms] +Apr 29 19:36:27.485: INFO: Created: latency-svc-bjcth +Apr 29 19:36:27.486: INFO: Got endpoints: latency-svc-bjcth [200.209757ms] +Apr 29 19:36:27.490: INFO: Created: latency-svc-22npc +Apr 29 19:36:27.499: INFO: Got endpoints: latency-svc-22npc [205.384212ms] +Apr 29 19:36:27.503: INFO: Created: latency-svc-t54f4 +Apr 29 19:36:27.515: INFO: Got endpoints: latency-svc-t54f4 [213.086285ms] +Apr 29 19:36:27.516: INFO: Created: latency-svc-6fcpt +Apr 29 19:36:27.526: INFO: Got endpoints: latency-svc-6fcpt [215.899102ms] +Apr 29 19:36:27.530: INFO: Created: latency-svc-5llmt +Apr 29 19:36:27.540: INFO: Got endpoints: latency-svc-5llmt [193.804664ms] +Apr 29 19:36:27.558: INFO: Created: latency-svc-q794f +Apr 29 19:36:27.566: INFO: Got endpoints: latency-svc-q794f [205.096163ms] +Apr 29 19:36:27.569: INFO: Created: latency-svc-b7rqp +Apr 29 19:36:27.577: INFO: Created: latency-svc-mcjz6 +Apr 29 19:36:27.578: INFO: Got endpoints: 
latency-svc-b7rqp [207.184596ms] +Apr 29 19:36:27.585: INFO: Got endpoints: latency-svc-mcjz6 [202.708457ms] +Apr 29 19:36:27.587: INFO: Created: latency-svc-84h9l +Apr 29 19:36:27.597: INFO: Got endpoints: latency-svc-84h9l [204.125976ms] +Apr 29 19:36:27.598: INFO: Created: latency-svc-xk68z +Apr 29 19:36:27.611: INFO: Got endpoints: latency-svc-xk68z [201.097216ms] +Apr 29 19:36:27.633: INFO: Created: latency-svc-nhpf5 +Apr 29 19:36:27.647: INFO: Got endpoints: latency-svc-nhpf5 [225.680988ms] +Apr 29 19:36:27.648: INFO: Created: latency-svc-bqtkn +Apr 29 19:36:27.677: INFO: Got endpoints: latency-svc-bqtkn [249.048769ms] +Apr 29 19:36:27.678: INFO: Created: latency-svc-zp5dq +Apr 29 19:36:27.687: INFO: Got endpoints: latency-svc-zp5dq [243.75688ms] +Apr 29 19:36:27.693: INFO: Created: latency-svc-xfpl9 +Apr 29 19:36:27.703: INFO: Created: latency-svc-jd5km +Apr 29 19:36:27.705: INFO: Got endpoints: latency-svc-xfpl9 [244.983081ms] +Apr 29 19:36:27.711: INFO: Created: latency-svc-5xkd5 +Apr 29 19:36:27.716: INFO: Got endpoints: latency-svc-jd5km [246.859139ms] +Apr 29 19:36:27.719: INFO: Got endpoints: latency-svc-5xkd5 [232.664247ms] +Apr 29 19:36:27.720: INFO: Created: latency-svc-crxdg +Apr 29 19:36:27.727: INFO: Got endpoints: latency-svc-crxdg [228.123242ms] +Apr 29 19:36:27.728: INFO: Created: latency-svc-kv26d +Apr 29 19:36:27.738: INFO: Created: latency-svc-8ktxq +Apr 29 19:36:27.746: INFO: Created: latency-svc-28trs +Apr 29 19:36:27.752: INFO: Created: latency-svc-shjrh +Apr 29 19:36:27.763: INFO: Created: latency-svc-gfzmf +Apr 29 19:36:27.769: INFO: Created: latency-svc-xjjmr +Apr 29 19:36:27.805: INFO: Created: latency-svc-2jgcn +Apr 29 19:36:27.807: INFO: Got endpoints: latency-svc-kv26d [291.005163ms] +Apr 29 19:36:27.815: INFO: Created: latency-svc-p84zs +Apr 29 19:36:27.827: INFO: Got endpoints: latency-svc-8ktxq [301.118791ms] +Apr 29 19:36:27.828: INFO: Created: latency-svc-pk8h6 +Apr 29 19:36:27.838: INFO: Created: latency-svc-7lkbq +Apr 29 19:36:27.844: INFO: Created: latency-svc-7rb9n +Apr 29 19:36:27.852: INFO: Created: latency-svc-tlpzn +Apr 29 19:36:27.859: INFO: Created: latency-svc-469jt +Apr 29 19:36:27.869: INFO: Created: latency-svc-w7ksb +Apr 29 19:36:27.881: INFO: Created: latency-svc-f66j9 +Apr 29 19:36:27.881: INFO: Got endpoints: latency-svc-28trs [341.712798ms] +Apr 29 19:36:27.891: INFO: Created: latency-svc-w46mb +Apr 29 19:36:27.899: INFO: Created: latency-svc-7r8tt +Apr 29 19:36:27.924: INFO: Created: latency-svc-76c57 +Apr 29 19:36:27.930: INFO: Got endpoints: latency-svc-shjrh [364.00189ms] +Apr 29 19:36:27.945: INFO: Created: latency-svc-p2smg +Apr 29 19:36:27.978: INFO: Got endpoints: latency-svc-gfzmf [400.291675ms] +Apr 29 19:36:27.992: INFO: Created: latency-svc-wdxf7 +Apr 29 19:36:28.033: INFO: Got endpoints: latency-svc-xjjmr [447.918876ms] +Apr 29 19:36:28.046: INFO: Created: latency-svc-49g5b +Apr 29 19:36:28.076: INFO: Got endpoints: latency-svc-2jgcn [478.742804ms] +Apr 29 19:36:28.092: INFO: Created: latency-svc-9wjrn +Apr 29 19:36:28.126: INFO: Got endpoints: latency-svc-p84zs [515.106265ms] +Apr 29 19:36:28.143: INFO: Created: latency-svc-b9tjq +Apr 29 19:36:28.177: INFO: Got endpoints: latency-svc-pk8h6 [530.544119ms] +Apr 29 19:36:28.198: INFO: Created: latency-svc-qflb4 +Apr 29 19:36:28.229: INFO: Got endpoints: latency-svc-7lkbq [551.824209ms] +Apr 29 19:36:28.240: INFO: Created: latency-svc-jslwf +Apr 29 19:36:28.275: INFO: Got endpoints: latency-svc-7rb9n [587.632805ms] +Apr 29 19:36:28.288: INFO: Created: latency-svc-gh7pr 
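Each "Created: latency-svc-*" line above records a new Service being pointed at the single-replica RC, and the paired "Got endpoints" line records how long the corresponding Endpoints object took to materialize; the bracketed duration is one latency sample for the percentile summary printed at the end of the run. To watch the same endpoint propagation by hand (the service name below is illustrative, not one of the generated ones):

```console
$ kubectl -n svc-latency-8870 get endpoints latency-svc-example --watch
```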
+Apr 29 19:36:28.328: INFO: Got endpoints: latency-svc-tlpzn [623.309132ms] +Apr 29 19:36:28.341: INFO: Created: latency-svc-m4fwk +Apr 29 19:36:28.376: INFO: Got endpoints: latency-svc-469jt [660.008414ms] +Apr 29 19:36:28.403: INFO: Created: latency-svc-wvh5r +Apr 29 19:36:28.431: INFO: Got endpoints: latency-svc-w7ksb [711.891755ms] +Apr 29 19:36:28.441: INFO: Created: latency-svc-cs4xd +Apr 29 19:36:28.477: INFO: Got endpoints: latency-svc-f66j9 [749.799556ms] +Apr 29 19:36:28.489: INFO: Created: latency-svc-tpr5f +Apr 29 19:36:28.526: INFO: Got endpoints: latency-svc-w46mb [719.07895ms] +Apr 29 19:36:28.539: INFO: Created: latency-svc-wspcd +Apr 29 19:36:28.576: INFO: Got endpoints: latency-svc-7r8tt [748.97976ms] +Apr 29 19:36:28.588: INFO: Created: latency-svc-44s6n +Apr 29 19:36:28.626: INFO: Got endpoints: latency-svc-76c57 [744.374733ms] +Apr 29 19:36:28.639: INFO: Created: latency-svc-d68lq +Apr 29 19:36:28.676: INFO: Got endpoints: latency-svc-p2smg [745.583478ms] +Apr 29 19:36:28.688: INFO: Created: latency-svc-t8l6f +Apr 29 19:36:28.728: INFO: Got endpoints: latency-svc-wdxf7 [749.447996ms] +Apr 29 19:36:28.741: INFO: Created: latency-svc-ds7nb +Apr 29 19:36:28.779: INFO: Got endpoints: latency-svc-49g5b [745.577072ms] +Apr 29 19:36:28.790: INFO: Created: latency-svc-kswwq +Apr 29 19:36:28.831: INFO: Got endpoints: latency-svc-9wjrn [754.656453ms] +Apr 29 19:36:28.843: INFO: Created: latency-svc-vdpx8 +Apr 29 19:36:28.876: INFO: Got endpoints: latency-svc-b9tjq [749.081659ms] +Apr 29 19:36:28.891: INFO: Created: latency-svc-45mjd +Apr 29 19:36:28.926: INFO: Got endpoints: latency-svc-qflb4 [748.675129ms] +Apr 29 19:36:28.937: INFO: Created: latency-svc-6xwv5 +Apr 29 19:36:28.982: INFO: Got endpoints: latency-svc-jslwf [752.973575ms] +Apr 29 19:36:28.996: INFO: Created: latency-svc-xptl8 +Apr 29 19:36:29.026: INFO: Got endpoints: latency-svc-gh7pr [750.571054ms] +Apr 29 19:36:29.040: INFO: Created: latency-svc-ctncp +Apr 29 19:36:29.078: INFO: Got endpoints: latency-svc-m4fwk [750.374941ms] +Apr 29 19:36:29.104: INFO: Created: latency-svc-sqdmv +Apr 29 19:36:29.129: INFO: Got endpoints: latency-svc-wvh5r [752.655444ms] +Apr 29 19:36:29.149: INFO: Created: latency-svc-dbvxg +Apr 29 19:36:29.175: INFO: Got endpoints: latency-svc-cs4xd [743.865086ms] +Apr 29 19:36:29.190: INFO: Created: latency-svc-qq2v7 +Apr 29 19:36:29.226: INFO: Got endpoints: latency-svc-tpr5f [748.376921ms] +Apr 29 19:36:29.239: INFO: Created: latency-svc-4hl9w +Apr 29 19:36:29.276: INFO: Got endpoints: latency-svc-wspcd [750.093709ms] +Apr 29 19:36:29.288: INFO: Created: latency-svc-7rq45 +Apr 29 19:36:29.326: INFO: Got endpoints: latency-svc-44s6n [750.003753ms] +Apr 29 19:36:29.339: INFO: Created: latency-svc-z4tck +Apr 29 19:36:29.378: INFO: Got endpoints: latency-svc-d68lq [752.083713ms] +Apr 29 19:36:29.391: INFO: Created: latency-svc-n764f +Apr 29 19:36:29.425: INFO: Got endpoints: latency-svc-t8l6f [749.034542ms] +Apr 29 19:36:29.438: INFO: Created: latency-svc-kh45j +Apr 29 19:36:29.480: INFO: Got endpoints: latency-svc-ds7nb [752.101529ms] +Apr 29 19:36:29.493: INFO: Created: latency-svc-bh574 +Apr 29 19:36:29.534: INFO: Got endpoints: latency-svc-kswwq [754.622378ms] +Apr 29 19:36:29.546: INFO: Created: latency-svc-mgx7m +Apr 29 19:36:29.578: INFO: Got endpoints: latency-svc-vdpx8 [747.039477ms] +Apr 29 19:36:29.589: INFO: Created: latency-svc-d4sgc +Apr 29 19:36:29.626: INFO: Got endpoints: latency-svc-45mjd [750.427468ms] +Apr 29 19:36:29.639: INFO: Created: latency-svc-7g4gv +Apr 29 
19:36:29.680: INFO: Got endpoints: latency-svc-6xwv5 [753.48592ms] +Apr 29 19:36:29.693: INFO: Created: latency-svc-jdhgn +Apr 29 19:36:29.726: INFO: Got endpoints: latency-svc-xptl8 [744.168807ms] +Apr 29 19:36:29.738: INFO: Created: latency-svc-zkxqs +Apr 29 19:36:29.776: INFO: Got endpoints: latency-svc-ctncp [750.30178ms] +Apr 29 19:36:29.789: INFO: Created: latency-svc-lbgdd +Apr 29 19:36:29.826: INFO: Got endpoints: latency-svc-sqdmv [747.485297ms] +Apr 29 19:36:29.838: INFO: Created: latency-svc-zpv2n +Apr 29 19:36:29.876: INFO: Got endpoints: latency-svc-dbvxg [747.11967ms] +Apr 29 19:36:29.896: INFO: Created: latency-svc-58lrq +Apr 29 19:36:29.929: INFO: Got endpoints: latency-svc-qq2v7 [754.075201ms] +Apr 29 19:36:29.939: INFO: Created: latency-svc-lbg8l +Apr 29 19:36:29.975: INFO: Got endpoints: latency-svc-4hl9w [749.386679ms] +Apr 29 19:36:29.986: INFO: Created: latency-svc-thl89 +Apr 29 19:36:30.025: INFO: Got endpoints: latency-svc-7rq45 [749.199898ms] +Apr 29 19:36:30.041: INFO: Created: latency-svc-67ph7 +Apr 29 19:36:30.080: INFO: Got endpoints: latency-svc-z4tck [753.128438ms] +Apr 29 19:36:30.097: INFO: Created: latency-svc-6xhvf +Apr 29 19:36:30.129: INFO: Got endpoints: latency-svc-n764f [750.694177ms] +Apr 29 19:36:30.152: INFO: Created: latency-svc-2g2fw +Apr 29 19:36:30.179: INFO: Got endpoints: latency-svc-kh45j [753.29612ms] +Apr 29 19:36:30.241: INFO: Created: latency-svc-s4lpb +Apr 29 19:36:30.244: INFO: Got endpoints: latency-svc-bh574 [764.432448ms] +Apr 29 19:36:30.286: INFO: Got endpoints: latency-svc-mgx7m [752.489104ms] +Apr 29 19:36:30.286: INFO: Created: latency-svc-xxxmv +Apr 29 19:36:30.310: INFO: Created: latency-svc-55snt +Apr 29 19:36:30.335: INFO: Got endpoints: latency-svc-d4sgc [756.44491ms] +Apr 29 19:36:30.354: INFO: Created: latency-svc-q49sw +Apr 29 19:36:30.380: INFO: Got endpoints: latency-svc-7g4gv [753.515387ms] +Apr 29 19:36:30.394: INFO: Created: latency-svc-x8g6s +Apr 29 19:36:30.427: INFO: Got endpoints: latency-svc-jdhgn [747.587667ms] +Apr 29 19:36:30.440: INFO: Created: latency-svc-hzf6s +Apr 29 19:36:30.478: INFO: Got endpoints: latency-svc-zkxqs [751.49741ms] +Apr 29 19:36:30.496: INFO: Created: latency-svc-54jqx +Apr 29 19:36:30.530: INFO: Got endpoints: latency-svc-lbgdd [753.938097ms] +Apr 29 19:36:30.541: INFO: Created: latency-svc-c64r7 +Apr 29 19:36:30.577: INFO: Got endpoints: latency-svc-zpv2n [751.242023ms] +Apr 29 19:36:30.593: INFO: Created: latency-svc-djn5z +Apr 29 19:36:30.631: INFO: Got endpoints: latency-svc-58lrq [754.785044ms] +Apr 29 19:36:30.641: INFO: Created: latency-svc-c2qgk +Apr 29 19:36:30.678: INFO: Got endpoints: latency-svc-lbg8l [749.095236ms] +Apr 29 19:36:30.689: INFO: Created: latency-svc-zjwlq +Apr 29 19:36:30.733: INFO: Got endpoints: latency-svc-thl89 [758.192225ms] +Apr 29 19:36:30.750: INFO: Created: latency-svc-vgwvt +Apr 29 19:36:30.775: INFO: Got endpoints: latency-svc-67ph7 [750.324697ms] +Apr 29 19:36:30.786: INFO: Created: latency-svc-wnrcm +Apr 29 19:36:30.827: INFO: Got endpoints: latency-svc-6xhvf [747.375542ms] +Apr 29 19:36:30.839: INFO: Created: latency-svc-54x6g +Apr 29 19:36:30.874: INFO: Got endpoints: latency-svc-2g2fw [744.833866ms] +Apr 29 19:36:30.928: INFO: Created: latency-svc-rv94k +Apr 29 19:36:30.931: INFO: Got endpoints: latency-svc-s4lpb [752.322731ms] +Apr 29 19:36:30.945: INFO: Created: latency-svc-5v6lb +Apr 29 19:36:30.976: INFO: Got endpoints: latency-svc-xxxmv [731.340726ms] +Apr 29 19:36:31.073: INFO: Created: latency-svc-jtbn8 +Apr 29 19:36:31.079: INFO: Got 
endpoints: latency-svc-55snt [792.896971ms] +Apr 29 19:36:31.086: INFO: Got endpoints: latency-svc-q49sw [751.50614ms] +Apr 29 19:36:31.146: INFO: Created: latency-svc-g4pm6 +Apr 29 19:36:31.146: INFO: Got endpoints: latency-svc-x8g6s [766.361357ms] +Apr 29 19:36:31.152: INFO: Created: latency-svc-4mkxx +Apr 29 19:36:31.173: INFO: Created: latency-svc-mpbd2 +Apr 29 19:36:31.176: INFO: Got endpoints: latency-svc-hzf6s [748.67531ms] +Apr 29 19:36:31.198: INFO: Created: latency-svc-7gwhr +Apr 29 19:36:31.231: INFO: Got endpoints: latency-svc-54jqx [753.055911ms] +Apr 29 19:36:31.249: INFO: Created: latency-svc-qbf42 +Apr 29 19:36:31.277: INFO: Got endpoints: latency-svc-c64r7 [746.699888ms] +Apr 29 19:36:31.288: INFO: Created: latency-svc-9kvqj +Apr 29 19:36:31.326: INFO: Got endpoints: latency-svc-djn5z [748.436073ms] +Apr 29 19:36:31.338: INFO: Created: latency-svc-wclpd +Apr 29 19:36:31.377: INFO: Got endpoints: latency-svc-c2qgk [746.471955ms] +Apr 29 19:36:31.408: INFO: Created: latency-svc-d4b6s +Apr 29 19:36:31.425: INFO: Got endpoints: latency-svc-zjwlq [747.193065ms] +Apr 29 19:36:31.435: INFO: Created: latency-svc-jxsh9 +Apr 29 19:36:31.478: INFO: Got endpoints: latency-svc-vgwvt [743.436843ms] +Apr 29 19:36:31.490: INFO: Created: latency-svc-lqxnk +Apr 29 19:36:31.526: INFO: Got endpoints: latency-svc-wnrcm [750.240603ms] +Apr 29 19:36:31.543: INFO: Created: latency-svc-kbl8n +Apr 29 19:36:31.576: INFO: Got endpoints: latency-svc-54x6g [748.281787ms] +Apr 29 19:36:31.603: INFO: Created: latency-svc-c74j5 +Apr 29 19:36:31.634: INFO: Got endpoints: latency-svc-rv94k [760.017659ms] +Apr 29 19:36:31.654: INFO: Created: latency-svc-gq9jg +Apr 29 19:36:31.674: INFO: Got endpoints: latency-svc-5v6lb [742.929325ms] +Apr 29 19:36:31.690: INFO: Created: latency-svc-rqbq5 +Apr 29 19:36:31.731: INFO: Got endpoints: latency-svc-jtbn8 [755.062371ms] +Apr 29 19:36:31.749: INFO: Created: latency-svc-xzx2r +Apr 29 19:36:31.777: INFO: Got endpoints: latency-svc-g4pm6 [697.773957ms] +Apr 29 19:36:31.799: INFO: Created: latency-svc-xjvhs +Apr 29 19:36:31.829: INFO: Got endpoints: latency-svc-4mkxx [742.849293ms] +Apr 29 19:36:31.840: INFO: Created: latency-svc-jwwz2 +Apr 29 19:36:31.876: INFO: Got endpoints: latency-svc-mpbd2 [729.725954ms] +Apr 29 19:36:31.888: INFO: Created: latency-svc-cqc26 +Apr 29 19:36:31.929: INFO: Got endpoints: latency-svc-7gwhr [753.133708ms] +Apr 29 19:36:31.949: INFO: Created: latency-svc-fzj7n +Apr 29 19:36:31.976: INFO: Got endpoints: latency-svc-qbf42 [745.140643ms] +Apr 29 19:36:31.989: INFO: Created: latency-svc-dmvdq +Apr 29 19:36:32.026: INFO: Got endpoints: latency-svc-9kvqj [749.505474ms] +Apr 29 19:36:32.050: INFO: Created: latency-svc-qxrvg +Apr 29 19:36:32.079: INFO: Got endpoints: latency-svc-wclpd [753.052089ms] +Apr 29 19:36:32.091: INFO: Created: latency-svc-nsrdw +Apr 29 19:36:32.126: INFO: Got endpoints: latency-svc-d4b6s [748.206058ms] +Apr 29 19:36:32.139: INFO: Created: latency-svc-z9cj7 +Apr 29 19:36:32.176: INFO: Got endpoints: latency-svc-jxsh9 [750.166724ms] +Apr 29 19:36:32.195: INFO: Created: latency-svc-xbvch +Apr 29 19:36:32.226: INFO: Got endpoints: latency-svc-lqxnk [748.466688ms] +Apr 29 19:36:32.245: INFO: Created: latency-svc-vssbv +Apr 29 19:36:32.275: INFO: Got endpoints: latency-svc-kbl8n [748.912363ms] +Apr 29 19:36:32.291: INFO: Created: latency-svc-pm8lz +Apr 29 19:36:32.326: INFO: Got endpoints: latency-svc-c74j5 [750.440802ms] +Apr 29 19:36:32.342: INFO: Created: latency-svc-fdxkn +Apr 29 19:36:32.376: INFO: Got endpoints: 
latency-svc-gq9jg [742.24922ms] +Apr 29 19:36:32.402: INFO: Created: latency-svc-l2wm6 +Apr 29 19:36:32.431: INFO: Got endpoints: latency-svc-rqbq5 [757.166215ms] +Apr 29 19:36:32.448: INFO: Created: latency-svc-xbwdc +Apr 29 19:36:32.484: INFO: Got endpoints: latency-svc-xzx2r [752.666936ms] +Apr 29 19:36:32.498: INFO: Created: latency-svc-q854p +Apr 29 19:36:32.529: INFO: Got endpoints: latency-svc-xjvhs [751.449793ms] +Apr 29 19:36:32.544: INFO: Created: latency-svc-b2nv8 +Apr 29 19:36:32.577: INFO: Got endpoints: latency-svc-jwwz2 [747.275145ms] +Apr 29 19:36:32.593: INFO: Created: latency-svc-jnm7m +Apr 29 19:36:32.627: INFO: Got endpoints: latency-svc-cqc26 [750.889582ms] +Apr 29 19:36:32.646: INFO: Created: latency-svc-hf57n +Apr 29 19:36:32.680: INFO: Got endpoints: latency-svc-fzj7n [750.176958ms] +Apr 29 19:36:32.700: INFO: Created: latency-svc-xppm9 +Apr 29 19:36:32.725: INFO: Got endpoints: latency-svc-dmvdq [748.748708ms] +Apr 29 19:36:32.738: INFO: Created: latency-svc-rr52w +Apr 29 19:36:32.780: INFO: Got endpoints: latency-svc-qxrvg [753.066423ms] +Apr 29 19:36:32.797: INFO: Created: latency-svc-pwsmq +Apr 29 19:36:32.826: INFO: Got endpoints: latency-svc-nsrdw [747.112491ms] +Apr 29 19:36:32.849: INFO: Created: latency-svc-dqrpj +Apr 29 19:36:32.878: INFO: Got endpoints: latency-svc-z9cj7 [751.926947ms] +Apr 29 19:36:32.891: INFO: Created: latency-svc-64stj +Apr 29 19:36:32.931: INFO: Got endpoints: latency-svc-xbvch [755.36535ms] +Apr 29 19:36:32.951: INFO: Created: latency-svc-ndllq +Apr 29 19:36:32.975: INFO: Got endpoints: latency-svc-vssbv [748.531921ms] +Apr 29 19:36:32.988: INFO: Created: latency-svc-8f67z +Apr 29 19:36:33.027: INFO: Got endpoints: latency-svc-pm8lz [751.798533ms] +Apr 29 19:36:33.048: INFO: Created: latency-svc-7fh9p +Apr 29 19:36:33.076: INFO: Got endpoints: latency-svc-fdxkn [750.111081ms] +Apr 29 19:36:33.086: INFO: Created: latency-svc-xfh2l +Apr 29 19:36:33.126: INFO: Got endpoints: latency-svc-l2wm6 [749.899827ms] +Apr 29 19:36:33.153: INFO: Created: latency-svc-2x76l +Apr 29 19:36:33.179: INFO: Got endpoints: latency-svc-xbwdc [748.107959ms] +Apr 29 19:36:36.768: INFO: Got endpoints: latency-svc-b2nv8 [4.238827296s] +Apr 29 19:36:36.768: INFO: Got endpoints: latency-svc-q854p [4.283899206s] +Apr 29 19:36:36.769: INFO: Created: latency-svc-d84cj +Apr 29 19:36:36.770: INFO: Got endpoints: latency-svc-hf57n [4.142708028s] +Apr 29 19:36:36.770: INFO: Got endpoints: latency-svc-jnm7m [4.193030505s] +Apr 29 19:36:36.770: INFO: Got endpoints: latency-svc-pwsmq [3.990127687s] +Apr 29 19:36:36.770: INFO: Got endpoints: latency-svc-xppm9 [4.090034619s] +Apr 29 19:36:36.771: INFO: Got endpoints: latency-svc-dqrpj [3.945074828s] +Apr 29 19:36:36.771: INFO: Got endpoints: latency-svc-ndllq [3.840242955s] +Apr 29 19:36:36.771: INFO: Got endpoints: latency-svc-rr52w [4.046211383s] +Apr 29 19:36:36.772: INFO: Got endpoints: latency-svc-64stj [3.894402897s] +Apr 29 19:36:36.937: INFO: Created: latency-svc-ps7gd +Apr 29 19:36:36.949: INFO: Got endpoints: latency-svc-7fh9p [3.922290611s] +Apr 29 19:36:36.949: INFO: Got endpoints: latency-svc-xfh2l [3.872446833s] +Apr 29 19:36:36.949: INFO: Got endpoints: latency-svc-d84cj [3.769386029s] +Apr 29 19:36:36.949: INFO: Got endpoints: latency-svc-8f67z [3.974016866s] +Apr 29 19:36:36.951: INFO: Got endpoints: latency-svc-2x76l [3.824593452s] +Apr 29 19:36:36.952: INFO: Got endpoints: latency-svc-ps7gd [184.249584ms] +Apr 29 19:36:36.964: INFO: Created: latency-svc-258vd +Apr 29 19:36:37.132: INFO: Got endpoints: 
latency-svc-258vd [359.404205ms] +Apr 29 19:36:37.152: INFO: Created: latency-svc-msg8p +Apr 29 19:36:37.179: INFO: Got endpoints: latency-svc-msg8p [411.510358ms] +Apr 29 19:36:37.250: INFO: Created: latency-svc-kxcc8 +Apr 29 19:36:37.312: INFO: Got endpoints: latency-svc-kxcc8 [542.422516ms] +Apr 29 19:36:37.318: INFO: Created: latency-svc-mjffj +Apr 29 19:36:37.326: INFO: Created: latency-svc-lb25f +Apr 29 19:36:37.327: INFO: Got endpoints: latency-svc-mjffj [556.940211ms] +Apr 29 19:36:37.340: INFO: Got endpoints: latency-svc-lb25f [569.437436ms] +Apr 29 19:36:37.343: INFO: Created: latency-svc-jtwgq +Apr 29 19:36:37.351: INFO: Got endpoints: latency-svc-jtwgq [580.944089ms] +Apr 29 19:36:37.354: INFO: Created: latency-svc-x4z4k +Apr 29 19:36:37.357: INFO: Got endpoints: latency-svc-x4z4k [585.798642ms] +Apr 29 19:36:37.366: INFO: Created: latency-svc-cscnm +Apr 29 19:36:37.372: INFO: Got endpoints: latency-svc-cscnm [600.730479ms] +Apr 29 19:36:37.400: INFO: Created: latency-svc-m9ps4 +Apr 29 19:36:37.401: INFO: Created: latency-svc-96rb5 +Apr 29 19:36:37.401: INFO: Got endpoints: latency-svc-96rb5 [629.386506ms] +Apr 29 19:36:37.406: INFO: Created: latency-svc-4wqjg +Apr 29 19:36:37.459: INFO: Got endpoints: latency-svc-4wqjg [509.528499ms] +Apr 29 19:36:37.459: INFO: Got endpoints: latency-svc-m9ps4 [509.932855ms] +Apr 29 19:36:37.466: INFO: Created: latency-svc-wb2q2 +Apr 29 19:36:37.479: INFO: Got endpoints: latency-svc-wb2q2 [529.827111ms] +Apr 29 19:36:37.487: INFO: Created: latency-svc-bjp7s +Apr 29 19:36:37.493: INFO: Created: latency-svc-pb6sc +Apr 29 19:36:37.529: INFO: Got endpoints: latency-svc-bjp7s [579.697338ms] +Apr 29 19:36:37.533: INFO: Created: latency-svc-5t4sm +Apr 29 19:36:37.537: INFO: Got endpoints: latency-svc-pb6sc [586.070605ms] +Apr 29 19:36:37.574: INFO: Got endpoints: latency-svc-5t4sm [621.335993ms] +Apr 29 19:36:37.587: INFO: Created: latency-svc-m4c7f +Apr 29 19:36:37.610: INFO: Got endpoints: latency-svc-m4c7f [477.688403ms] +Apr 29 19:36:37.615: INFO: Created: latency-svc-w5c8m +Apr 29 19:36:37.632: INFO: Got endpoints: latency-svc-w5c8m [452.727736ms] +Apr 29 19:36:37.637: INFO: Created: latency-svc-6hpcr +Apr 29 19:36:37.648: INFO: Got endpoints: latency-svc-6hpcr [335.589869ms] +Apr 29 19:36:37.654: INFO: Created: latency-svc-xxtpg +Apr 29 19:36:37.696: INFO: Got endpoints: latency-svc-xxtpg [368.782432ms] +Apr 29 19:36:37.697: INFO: Created: latency-svc-c2jvx +Apr 29 19:36:37.724: INFO: Got endpoints: latency-svc-c2jvx [384.568784ms] +Apr 29 19:36:37.778: INFO: Created: latency-svc-kd25x +Apr 29 19:36:37.803: INFO: Created: latency-svc-nnmj6 +Apr 29 19:36:37.819: INFO: Got endpoints: latency-svc-kd25x [468.285302ms] +Apr 29 19:36:37.826: INFO: Got endpoints: latency-svc-nnmj6 [468.598673ms] +Apr 29 19:36:37.829: INFO: Created: latency-svc-4lgs7 +Apr 29 19:36:37.846: INFO: Got endpoints: latency-svc-4lgs7 [474.119745ms] +Apr 29 19:36:37.848: INFO: Created: latency-svc-6q5fq +Apr 29 19:36:37.870: INFO: Got endpoints: latency-svc-6q5fq [468.870344ms] +Apr 29 19:36:37.872: INFO: Created: latency-svc-8rgpb +Apr 29 19:36:37.896: INFO: Got endpoints: latency-svc-8rgpb [437.197904ms] +Apr 29 19:36:37.902: INFO: Created: latency-svc-lxv4f +Apr 29 19:36:37.915: INFO: Got endpoints: latency-svc-lxv4f [455.569933ms] +Apr 29 19:36:37.937: INFO: Created: latency-svc-bwc9c +Apr 29 19:36:37.956: INFO: Got endpoints: latency-svc-bwc9c [476.803195ms] +Apr 29 19:36:37.959: INFO: Created: latency-svc-gj64r +Apr 29 19:36:37.973: INFO: Got endpoints: latency-svc-gj64r 
[444.304823ms] +Apr 29 19:36:37.981: INFO: Created: latency-svc-4kgk6 +Apr 29 19:36:37.996: INFO: Got endpoints: latency-svc-4kgk6 [458.848668ms] +Apr 29 19:36:37.997: INFO: Created: latency-svc-qg2dj +Apr 29 19:36:38.003: INFO: Got endpoints: latency-svc-qg2dj [429.065595ms] +Apr 29 19:36:38.010: INFO: Created: latency-svc-8cj9p +Apr 29 19:36:38.023: INFO: Got endpoints: latency-svc-8cj9p [413.321652ms] +Apr 29 19:36:38.024: INFO: Created: latency-svc-4s959 +Apr 29 19:36:38.032: INFO: Got endpoints: latency-svc-4s959 [399.597037ms] +Apr 29 19:36:38.035: INFO: Created: latency-svc-d7rzl +Apr 29 19:36:38.091: INFO: Got endpoints: latency-svc-d7rzl [442.368722ms] +Apr 29 19:36:38.107: INFO: Created: latency-svc-2rgcc +Apr 29 19:36:38.111: INFO: Got endpoints: latency-svc-2rgcc [415.882556ms] +Apr 29 19:36:38.112: INFO: Latencies: [30.191302ms 33.728462ms 38.322119ms 53.84713ms 59.95053ms 89.037172ms 95.834144ms 116.387131ms 135.016952ms 158.573578ms 166.132683ms 175.481878ms 182.605383ms 184.249584ms 193.804664ms 200.209757ms 201.097216ms 202.708457ms 204.125976ms 205.096163ms 205.384212ms 207.1623ms 207.184596ms 211.6584ms 213.086285ms 213.222099ms 215.840349ms 215.899102ms 218.627194ms 220.399032ms 221.358522ms 225.680988ms 227.754801ms 228.123242ms 229.128431ms 232.664247ms 233.703192ms 234.470581ms 243.75688ms 244.983081ms 246.859139ms 249.048769ms 291.005163ms 301.118791ms 335.589869ms 341.712798ms 359.404205ms 364.00189ms 368.782432ms 384.568784ms 399.597037ms 400.291675ms 411.510358ms 413.321652ms 415.882556ms 429.065595ms 437.197904ms 442.368722ms 444.304823ms 447.918876ms 452.727736ms 455.569933ms 458.848668ms 468.285302ms 468.598673ms 468.870344ms 474.119745ms 476.803195ms 477.688403ms 478.742804ms 509.528499ms 509.932855ms 515.106265ms 529.827111ms 530.544119ms 542.422516ms 551.824209ms 556.940211ms 569.437436ms 579.697338ms 580.944089ms 585.798642ms 586.070605ms 587.632805ms 600.730479ms 621.335993ms 623.309132ms 629.386506ms 660.008414ms 697.773957ms 711.891755ms 719.07895ms 729.725954ms 731.340726ms 742.24922ms 742.849293ms 742.929325ms 743.436843ms 743.865086ms 744.168807ms 744.374733ms 744.833866ms 745.140643ms 745.577072ms 745.583478ms 746.471955ms 746.699888ms 747.039477ms 747.112491ms 747.11967ms 747.193065ms 747.275145ms 747.375542ms 747.485297ms 747.587667ms 748.107959ms 748.206058ms 748.281787ms 748.376921ms 748.436073ms 748.466688ms 748.531921ms 748.675129ms 748.67531ms 748.748708ms 748.912363ms 748.97976ms 749.034542ms 749.081659ms 749.095236ms 749.199898ms 749.386679ms 749.447996ms 749.505474ms 749.799556ms 749.899827ms 750.003753ms 750.093709ms 750.111081ms 750.166724ms 750.176958ms 750.240603ms 750.30178ms 750.324697ms 750.374941ms 750.427468ms 750.440802ms 750.571054ms 750.694177ms 750.889582ms 751.242023ms 751.449793ms 751.49741ms 751.50614ms 751.798533ms 751.926947ms 752.083713ms 752.101529ms 752.322731ms 752.489104ms 752.655444ms 752.666936ms 752.973575ms 753.052089ms 753.055911ms 753.066423ms 753.128438ms 753.133708ms 753.29612ms 753.48592ms 753.515387ms 753.938097ms 754.075201ms 754.622378ms 754.656453ms 754.785044ms 755.062371ms 755.36535ms 756.44491ms 757.166215ms 758.192225ms 760.017659ms 764.432448ms 766.361357ms 792.896971ms 3.769386029s 3.824593452s 3.840242955s 3.872446833s 3.894402897s 3.922290611s 3.945074828s 3.974016866s 3.990127687s 4.046211383s 4.090034619s 4.142708028s 4.193030505s 4.238827296s 4.283899206s] +Apr 29 19:36:38.112: INFO: 50 %ile: 744.374733ms +Apr 29 19:36:38.113: INFO: 90 %ile: 758.192225ms +Apr 29 19:36:38.113: INFO: 99 %ile: 
4.238827296s +Apr 29 19:36:38.113: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:38.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-8870" for this suite. + +• [SLOW TEST:13.243 seconds] +[sig-network] Service endpoints latency +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":259,"skipped":4535,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:38.138: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-9530 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-9530 +STEP: creating replication controller externalsvc in namespace services-9530 +I0429 19:36:38.237562 25 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9530, replica count: 2 +I0429 19:36:41.288968 25 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Apr 29 19:36:41.317: INFO: Creating new exec pod +Apr 29 19:36:43.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-9530 exec execpod9xsl4 -- /bin/sh -x -c nslookup nodeport-service.services-9530.svc.cluster.local' +Apr 29 19:36:43.596: INFO: stderr: "+ nslookup nodeport-service.services-9530.svc.cluster.local\n" +Apr 29 19:36:43.596: INFO: stdout: "Server:\t\t100.64.0.10\nAddress:\t100.64.0.10#53\n\nnodeport-service.services-9530.svc.cluster.local\tcanonical name = externalsvc.services-9530.svc.cluster.local.\nName:\texternalsvc.services-9530.svc.cluster.local\nAddress: 100.64.76.155\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-9530, will wait for the garbage collector to delete the pods +Apr 29 19:36:43.661: INFO: Deleting ReplicationController externalsvc took: 9.296082ms +Apr 29 19:36:45.061: INFO: Terminating 
ReplicationController externalsvc pods took: 1.400455276s +Apr 29 19:36:47.580: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:47.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9530" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.469 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":260,"skipped":4552,"failed":0} +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:47.608: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name s-test-opt-del-1047513f-0813-45df-8bef-13f8a1963e3e +STEP: Creating secret with name s-test-opt-upd-6a70ef16-a176-4e5c-87e0-b47db425f316 +STEP: Creating the pod +Apr 29 19:36:47.703: INFO: The status of Pod pod-projected-secrets-b67cd0a1-615d-40fe-965c-8638a34dc600 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:36:49.721: INFO: The status of Pod pod-projected-secrets-b67cd0a1-615d-40fe-965c-8638a34dc600 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-1047513f-0813-45df-8bef-13f8a1963e3e +STEP: Updating secret s-test-opt-upd-6a70ef16-a176-4e5c-87e0-b47db425f316 +STEP: Creating secret with name s-test-opt-create-92d57c28-5b09-4d0c-b40a-69fb6df4aa10 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6989" for this suite. 
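The projected-secret test just above mounts two secrets as optional sources of a projected volume, then deletes one, updates the other, and creates a third, waiting for the kubelet to reflect each change in the mounted files. A rough manual equivalent of the update step, with an illustrative secret name rather than the generated one from the log:

```console
$ kubectl -n projected-6989 create secret generic s-test-opt-upd-example --from-literal=data-1=value-1
$ kubectl -n projected-6989 create secret generic s-test-opt-upd-example \
    --from-literal=data-1=value-2 --dry-run=client -o yaml | kubectl apply -f -
```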
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":261,"skipped":4552,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:51.879: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:36:51.935: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e" in namespace "downward-api-5644" to be "Succeeded or Failed" +Apr 29 19:36:51.944: INFO: Pod "downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.444894ms +Apr 29 19:36:53.950: INFO: Pod "downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014736485s +Apr 29 19:36:55.956: INFO: Pod "downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02031103s +STEP: Saw pod success +Apr 29 19:36:55.956: INFO: Pod "downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e" satisfied condition "Succeeded or Failed" +Apr 29 19:36:55.962: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e container client-container: +STEP: delete the pod +Apr 29 19:36:55.983: INFO: Waiting for pod downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e to disappear +Apr 29 19:36:55.987: INFO: Pod downwardapi-volume-24b34025-98a1-4b12-a5fd-cf23eb52065e no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:55.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5644" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":262,"skipped":4566,"failed":0} +SSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:56.005: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting the auto-created API token +Apr 29 19:36:56.579: INFO: created pod pod-service-account-defaultsa +Apr 29 19:36:56.579: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Apr 29 19:36:56.585: INFO: created pod pod-service-account-mountsa +Apr 29 19:36:56.585: INFO: pod pod-service-account-mountsa service account token volume mount: true +Apr 29 19:36:56.591: INFO: created pod pod-service-account-nomountsa +Apr 29 19:36:56.591: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Apr 29 19:36:56.601: INFO: created pod pod-service-account-defaultsa-mountspec +Apr 29 19:36:56.601: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Apr 29 19:36:56.609: INFO: created pod pod-service-account-mountsa-mountspec +Apr 29 19:36:56.609: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Apr 29 19:36:56.618: INFO: created pod pod-service-account-nomountsa-mountspec +Apr 29 19:36:56.618: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Apr 29 19:36:56.627: INFO: created pod pod-service-account-defaultsa-nomountspec +Apr 29 19:36:56.627: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Apr 29 19:36:56.632: INFO: created pod pod-service-account-mountsa-nomountspec +Apr 29 19:36:56.632: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Apr 29 19:36:56.639: INFO: created pod pod-service-account-nomountsa-nomountspec +Apr 29 19:36:56.639: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:36:56.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-8862" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":263,"skipped":4573,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:36:56.668: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8844.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8844.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8844.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8844.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8844.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8844.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:37:04.775: INFO: DNS probes using dns-8844/dns-test-ccbc85fd-e0e8-44cf-9b24-23403f239f0f succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:37:04.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8844" for this suite. 
+ +• [SLOW TEST:8.137 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":264,"skipped":4581,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:37:04.806: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:37:04.851: INFO: Creating deployment "test-recreate-deployment" +Apr 29 19:37:04.861: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Apr 29 19:37:04.870: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Apr 29 19:37:06.881: INFO: Waiting deployment "test-recreate-deployment" to complete +Apr 29 19:37:06.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:37:08.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786857824, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} +Apr 29 19:37:10.891: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Apr 29 19:37:10.901: INFO: Updating deployment test-recreate-deployment +Apr 29 19:37:10.901: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Apr 29 19:37:10.972: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-2813 40218dcf-a6f9-4296-8c4a-5a86f61d165b 760434 2 2022-04-29 19:37:04 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-29 19:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0061eae18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-29 19:37:10 +0000 UTC,LastTransitionTime:2022-04-29 19:37:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-04-29 19:37:10 +0000 UTC,LastTransitionTime:2022-04-29 19:37:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Apr 29 19:37:10.976: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-2813 42e30f7a-bd16-4b55-a697-0108500f8f86 760431 1 2022-04-29 19:37:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 40218dcf-a6f9-4296-8c4a-5a86f61d165b 0xc0061eb2f0 0xc0061eb2f1}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40218dcf-a6f9-4296-8c4a-5a86f61d165b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:37:10 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0061eb388 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Apr 29 19:37:10.977: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Apr 29 19:37:10.977: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-2813 25882ab9-bcb2-4b32-8680-f5747d6a6166 760423 2 2022-04-29 19:37:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 40218dcf-a6f9-4296-8c4a-5a86f61d165b 0xc0061eb1d7 0xc0061eb1d8}] [] [{kube-controller-manager Update apps/v1 2022-04-29 19:37:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"40218dcf-a6f9-4296-8c4a-5a86f61d165b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-29 19:37:10 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0061eb288 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Apr 29 19:37:10.984: INFO: Pod "test-recreate-deployment-85d47dcb4-tsj7q" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-tsj7q test-recreate-deployment-85d47dcb4- deployment-2813 83105c40-309e-4455-9c1a-5eee12a2d142 760429 0 2022-04-29 19:37:10 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 42e30f7a-bd16-4b55-a697-0108500f8f86 0xc0061eb800 0xc0061eb801}] [] [{kube-controller-manager Update v1 2022-04-29 19:37:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42e30f7a-bd16-4b55-a697-0108500f8f86\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dtjrz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtjrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{K
ey:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:37:10.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2813" for this suite. + +• [SLOW TEST:6.190 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":265,"skipped":4588,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:37:10.998: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Apr 29 19:37:11.076: INFO: Number of nodes with available pods: 0 +Apr 29 19:37:11.076: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:37:12.093: INFO: Number of nodes with available pods: 0 +Apr 29 19:37:12.093: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:37:13.088: INFO: Number of nodes with available pods: 1 +Apr 29 19:37:13.088: INFO: Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc is running more than one daemon pod +Apr 29 19:37:14.089: INFO: Number of nodes with available pods: 2 +Apr 29 19:37:14.089: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +Apr 29 19:37:14.129: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"760494"},"items":null} + +Apr 29 19:37:14.138: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"760494"},"items":[{"metadata":{"name":"daemon-set-pvpgd","generateName":"daemon-set-","namespace":"daemonsets-7690","uid":"3d4d3ce4-f7bc-444f-adfc-a747144d6f9c","resourceVersion":"760490","creationTimestamp":"2022-04-29T19:37:11Z","labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"c0b86901-cb29-47d0-9401-2a168e8c2167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-04-29T19:37:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b86901-cb29-47d0-9401-2a168e8c2167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-04-29T19:37:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-qdtxp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAP
I":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-qdtxp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"tkg-mgmt-vc-md-0-59d8b7c778-msxpc","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["tkg-mgmt-vc-md-0-59d8b7c778-msxpc"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:11Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:13Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:13Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:11Z"}],"hostIP":"10.180.99.66","podIP":"100.96.1.135","podIPs":[{"ip":"100.96.1.135"}],"startTime":"2022-04-29T19:37:11Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-04-29T19:37:12Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://3c38bf97ac51276b2bbf4a0df8e243da005089b402db6d11c367f944dfd6de27","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-zhvs6","generateName":"daemon-set-","namespace":"daemonsets-7690","uid":"1dc1b2be-76c7-4737-8e50-ad71f46d13e4","resourceVersion":"760478","creationTimestamp":"2022-04-29T19:37:11Z","labels":{"controller-revision-hash":"577749b6b","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"c0b86901-cb29-47d0-9401-2a168e8c2167","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-04-29T19:37:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0b86901-cb29-47d0-9401-2a168e8c2167\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnor
edDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-04-29T19:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.0.154\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-f5gs7","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-f5gs7","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"tkg-mgmt-vc-control-plane-4czbf","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["tkg-mgmt-vc-control-plane-4czbf"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:11Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:12Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:12Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T19:37:11Z"}],"hostIP":"10.180.111.35","podIP":"100.96.0.154","podIPs":[{"ip":"100.96.0.154"}],"startTime":"2022-04-29T19:37:11Z","containerStatuses":[{"na
me":"app","state":{"running":{"startedAt":"2022-04-29T19:37:12Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50","containerID":"containerd://93e9e177ff418438272fcfa92033cea0e3aaf2e6c87924bee0bc88b56f79ae95","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:37:14.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7690" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":266,"skipped":4600,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:37:14.172: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:37:14.226: INFO: The status of Pod busybox-readonly-fs101c15fc-1419-41ff-98b9-0d9a783882d4 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:37:16.231: INFO: The status of Pod busybox-readonly-fs101c15fc-1419-41ff-98b9-0d9a783882d4 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:37:16.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-7496" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":267,"skipped":4625,"failed":0} +SSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:37:16.258: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Apr 29 19:37:16.295: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:37:21.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-9103" for this suite. + +• [SLOW TEST:5.603 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":268,"skipped":4628,"failed":0} +SS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:37:21.861: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-network] EndpointSlice + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:37:21.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-6462" for this suite. +•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":269,"skipped":4630,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:37:21.941: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Apr 29 19:37:21.979: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Apr 29 19:37:50.323: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:37:57.869: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:38:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4631" for this suite. 
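+
+The OpenAPI checks above boil down to: every served version of a CRD must appear in the apiserver's aggregated schema. With any multi-version CRD installed, the same document can be inspected by hand; the group and plural below are illustrative placeholders, not the ones generated by this run:
+
+```console
+# Dump the aggregated OpenAPI v2 document the test parses.
+$ kubectl get --raw /openapi/v2 > openapi.json
+# Field documentation for one served version comes from that same published schema.
+$ kubectl explain crontabs --api-version=stable.example.com/v2
+```
+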
+ +• [SLOW TEST:64.908 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":270,"skipped":4718,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:38:26.850: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service nodeport-test with type=NodePort in namespace services-6531 +STEP: creating replication controller nodeport-test in namespace services-6531 +I0429 19:38:26.924002 25 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6531, replica count: 2 +I0429 19:38:29.976320 25 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:38:29.976: INFO: Creating new exec pod +Apr 29 19:38:35.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6531 exec execpod6hdjq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Apr 29 19:38:35.264: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Apr 29 19:38:35.264: INFO: stdout: "" +Apr 29 19:38:36.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6531 exec execpod6hdjq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Apr 29 19:38:36.496: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Apr 29 19:38:36.496: INFO: stdout: "nodeport-test-556q9" +Apr 29 19:38:36.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6531 exec execpod6hdjq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.69.172.14 80' +Apr 29 19:38:36.689: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.69.172.14 80\nConnection to 100.69.172.14 80 port [tcp/http] succeeded!\n" +Apr 29 19:38:36.689: INFO: stdout: "nodeport-test-556q9" +Apr 29 19:38:36.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6531 exec 
execpod6hdjq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 31970' +Apr 29 19:38:36.855: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.111.35 31970\nConnection to 10.180.111.35 31970 port [tcp/*] succeeded!\n" +Apr 29 19:38:36.855: INFO: stdout: "" +Apr 29 19:38:37.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6531 exec execpod6hdjq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 31970' +Apr 29 19:38:38.031: INFO: stderr: "+ + ncecho -v -t hostName -w 2\n 10.180.111.35 31970\nConnection to 10.180.111.35 31970 port [tcp/*] succeeded!\n" +Apr 29 19:38:38.031: INFO: stdout: "nodeport-test-556q9" +Apr 29 19:38:38.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6531 exec execpod6hdjq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.99.66 31970' +Apr 29 19:38:38.215: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.99.66 31970\nConnection to 10.180.99.66 31970 port [tcp/*] succeeded!\n" +Apr 29 19:38:38.216: INFO: stdout: "nodeport-test-556q9" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:38:38.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6531" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:11.379 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":271,"skipped":4740,"failed":0} +SSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:38:38.229: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-downwardapi-jgbh +STEP: Creating a pod to test atomic-volume-subpath +Apr 29 19:38:38.297: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jgbh" in namespace "subpath-9707" to be "Succeeded or Failed" +Apr 29 19:38:38.301: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.703493ms +Apr 29 19:38:40.306: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 2.009283153s +Apr 29 19:38:42.313: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 4.015706743s +Apr 29 19:38:44.319: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 6.021930214s +Apr 29 19:38:46.325: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 8.02795558s +Apr 29 19:38:48.331: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 10.034073366s +Apr 29 19:38:50.337: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 12.040166586s +Apr 29 19:38:52.368: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 14.070456058s +Apr 29 19:38:54.374: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 16.077263525s +Apr 29 19:38:56.383: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 18.08606917s +Apr 29 19:38:58.391: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Running", Reason="", readiness=true. Elapsed: 20.093879911s +Apr 29 19:39:00.402: INFO: Pod "pod-subpath-test-downwardapi-jgbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.104812991s +STEP: Saw pod success +Apr 29 19:39:00.402: INFO: Pod "pod-subpath-test-downwardapi-jgbh" satisfied condition "Succeeded or Failed" +Apr 29 19:39:00.408: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-subpath-test-downwardapi-jgbh container test-container-subpath-downwardapi-jgbh: +STEP: delete the pod +Apr 29 19:39:00.459: INFO: Waiting for pod pod-subpath-test-downwardapi-jgbh to disappear +Apr 29 19:39:00.472: INFO: Pod pod-subpath-test-downwardapi-jgbh no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-jgbh +Apr 29 19:39:00.472: INFO: Deleting pod "pod-subpath-test-downwardapi-jgbh" in namespace "subpath-9707" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:39:00.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-9707" for this suite. 
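+
+The subpath spec above mounts a single file out of a downwardAPI volume via `subPath` and keeps reading it while the pod runs. A reduced sketch of the same volume layout (names are illustrative):
+
+```console
+$ kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-demo                # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: reader
+    image: busybox
+    command: ["cat", "/probe/podname"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /probe/podname
+      subPath: podname              # mount one file, not the whole volume
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: podname
+        fieldRef:
+          fieldPath: metadata.name
+EOF
+```
+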
+ +• [SLOW TEST:22.270 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":272,"skipped":4743,"failed":0} +SS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:39:00.500: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:39:00.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f" in namespace "projected-8691" to be "Succeeded or Failed" +Apr 29 19:39:00.573: INFO: Pod "downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.230736ms +Apr 29 19:39:02.581: INFO: Pod "downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012675158s +Apr 29 19:39:04.595: INFO: Pod "downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026802122s +STEP: Saw pod success +Apr 29 19:39:04.595: INFO: Pod "downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f" satisfied condition "Succeeded or Failed" +Apr 29 19:39:04.601: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f container client-container: +STEP: delete the pod +Apr 29 19:39:04.635: INFO: Waiting for pod downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f to disappear +Apr 29 19:39:04.639: INFO: Pod downwardapi-volume-2ecf042e-39a8-4b84-89eb-71f7dfe3266f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:39:04.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8691" for this suite. 
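+
+The projected downwardAPI spec above surfaces the container's memory limit as a file; the essential piece is a `resourceFieldRef` inside a projected volume source. A minimal sketch (the names and the 64Mi limit are illustrative):
+
+```console
+$ kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: memlimit-demo               # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["cat", "/etc/podinfo/mem_limit"]   # prints the limit in bytes
+    resources:
+      limits:
+        memory: "64Mi"
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: mem_limit
+            resourceFieldRef:
+              containerName: client-container
+              resource: limits.memory
+EOF
+```
+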
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":273,"skipped":4745,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:39:04.658: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Apr 29 19:39:08.730: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9899 PodName:pod-sharedvolume-5b69a808-81e3-421c-afaa-19dc6f9a4dcc ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:39:08.730: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:39:08.839: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:39:08.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9899" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":274,"skipped":4780,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:39:08.855: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Apr 29 19:39:10.933: INFO: &Pod{ObjectMeta:{send-events-e996a778-2069-4fa7-85ed-c257c1da4c15 events-4812 7f825517-a76a-4026-8259-5d90f5855b9a 761637 0 2022-04-29 19:39:08 +0000 UTC map[name:foo time:899886979] map[] [] [] [{e2e.test Update v1 2022-04-29 19:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-29 19:39:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.1.143\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bjsrj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjsrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:tkg-mgmt-vc-md-0-59d8b7c778-msxpc,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:39:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:39:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:39:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-29 19:39:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.180.99.66,PodIP:100.96.1.143,StartTime:2022-04-29 19:39:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-29 19:39:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://758526827ef04c0913011a6817ac96d6780ce542f5f105e3803d1679b3875769,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +STEP: checking for scheduler event about the pod +Apr 29 19:39:12.941: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Apr 29 19:39:14.948: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [sig-node] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:39:14.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-4812" for this suite. 
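+
+The event checks above can be reproduced with a field selector on the pod name; the scheduler event carries source `default-scheduler` and the kubelet events carry the node name (the pod name below is illustrative):
+
+```console
+$ kubectl get events -n default --field-selector involvedObject.name=send-events-demo
+# Expect a Scheduled event from default-scheduler plus Pulled/Created/Started events from the kubelet.
+```
+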
+ +• [SLOW TEST:6.112 seconds] +[sig-node] Events +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":346,"completed":275,"skipped":4840,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:39:14.968: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:39:15.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c" in namespace "downward-api-9961" to be "Succeeded or Failed" +Apr 29 19:39:15.025: INFO: Pod "downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010862ms +Apr 29 19:39:17.031: INFO: Pod "downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01011681s +Apr 29 19:39:19.037: INFO: Pod "downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016689849s +STEP: Saw pod success +Apr 29 19:39:19.038: INFO: Pod "downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c" satisfied condition "Succeeded or Failed" +Apr 29 19:39:19.041: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c container client-container: +STEP: delete the pod +Apr 29 19:39:19.059: INFO: Waiting for pod downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c to disappear +Apr 29 19:39:19.064: INFO: Pod downwardapi-volume-0f21bf3d-c056-4cdb-80e2-c21ec146301c no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:39:19.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9961" for this suite. 
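+
+The mode check above sets an explicit POSIX mode on one downwardAPI volume item. The relevant field is `mode` on the item itself; a minimal sketch (names are illustrative):
+
+```console
+$ kubectl apply -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: itemmode-demo               # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]   # -L follows the projected symlink
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: podname
+        mode: 0400                  # projected file ends up -r--------
+        fieldRef:
+          fieldPath: metadata.name
+EOF
+```
+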
+•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":276,"skipped":4912,"failed":0} +SSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:39:19.088: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Apr 29 19:39:19.144: INFO: Waiting up to 1m0s for all nodes to be ready +Apr 29 19:40:19.210: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:40:19.216: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Apr 29 19:40:23.295: INFO: found a healthy node: tkg-mgmt-vc-md-0-59d8b7c778-msxpc +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:40:35.410: INFO: pods created so far: [1 1 1] +Apr 29 19:40:35.410: INFO: length of pods created so far: 3 +Apr 29 19:40:37.422: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:40:44.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-6094" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:40:44.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-8642" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:85.450 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":277,"skipped":4919,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:40:44.540: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:00.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3397" for this suite. + +• [SLOW TEST:16.159 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":346,"completed":278,"skipped":4941,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:00.702: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-0ef85e30-d2ed-484f-8e94-fa87ecd3ab00 +STEP: Creating a pod to test consume configMaps +Apr 29 19:41:00.750: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1" in namespace "configmap-4921" to be "Succeeded or Failed" +Apr 29 19:41:00.753: INFO: Pod "pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.156738ms +Apr 29 19:41:02.762: INFO: Pod "pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01173404s +STEP: Saw pod success +Apr 29 19:41:02.762: INFO: Pod "pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1" satisfied condition "Succeeded or Failed" +Apr 29 19:41:02.767: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1 container agnhost-container: +STEP: delete the pod +Apr 29 19:41:02.794: INFO: Waiting for pod pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1 to disappear +Apr 29 19:41:02.798: INFO: Pod pod-configmaps-cb00264d-71ee-4094-96e1-4c03dcd418f1 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:02.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4921" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":279,"skipped":4976,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:02.811: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Apr 29 19:41:02.851: INFO: Waiting up to 5m0s for pod "downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4" in namespace "downward-api-3034" to be "Succeeded or Failed" +Apr 29 19:41:02.855: INFO: Pod "downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200676ms +Apr 29 19:41:04.864: INFO: Pod "downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012593461s +STEP: Saw pod success +Apr 29 19:41:04.864: INFO: Pod "downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4" satisfied condition "Succeeded or Failed" +Apr 29 19:41:04.868: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4 container dapi-container: +STEP: delete the pod +Apr 29 19:41:04.885: INFO: Waiting for pod downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4 to disappear +Apr 29 19:41:04.888: INFO: Pod downward-api-ca109d97-1c58-4d42-8e02-5b16aba790c4 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:04.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3034" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":4987,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:04.908: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete RS created by deployment when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for all rs to be garbage collected +STEP: expected 0 pods, got 2 pods +STEP: Gathering metrics +Apr 29 19:41:06.025: INFO: The status of Pod kube-controller-manager-tkg-mgmt-vc-control-plane-4czbf is Running (Ready = true) +Apr 29 19:41:06.355: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4028" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":281,"skipped":5007,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:06.372: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename lease-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:06.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-6844" for this suite. +•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":282,"skipped":5028,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:06.476: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:41:06.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e" in namespace "projected-7431" to be "Succeeded or Failed" +Apr 29 19:41:06.532: INFO: Pod "downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232505ms +Apr 29 19:41:08.537: INFO: Pod "downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012040468s +Apr 29 19:41:10.543: INFO: Pod "downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017423382s +STEP: Saw pod success +Apr 29 19:41:10.543: INFO: Pod "downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e" satisfied condition "Succeeded or Failed" +Apr 29 19:41:10.547: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e container client-container: +STEP: delete the pod +Apr 29 19:41:10.566: INFO: Waiting for pod downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e to disappear +Apr 29 19:41:10.569: INFO: Pod downwardapi-volume-06caba83-728e-4515-97a4-433f0295720e no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:10.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7431" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":283,"skipped":5046,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:10.579: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9936.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9936.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9936.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9936.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9936.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.229.66.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.66.229.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.229.66.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.66.229.32_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9936.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9936.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9936.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9936.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9936.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9936.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 32.229.66.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.66.229.32_udp@PTR;check="$$(dig +tcp +noall +answer +search 32.229.66.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.66.229.32_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Apr 29 19:41:12.684: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.690: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.696: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.701: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.739: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.746: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.757: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:12.789: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:17.798: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.804: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods 
dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.809: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.814: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.853: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.859: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.865: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:17.902: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:22.798: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.804: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.809: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.814: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.856: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod 
dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.865: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.871: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.877: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:22.915: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:27.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.813: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.852: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.856: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.861: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.865: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:27.896: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:32.795: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.799: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.804: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.809: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.843: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.847: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.852: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.856: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:32.882: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local 
jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:37.797: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.809: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.815: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.855: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.860: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.868: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:37.904: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:42.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.801: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.807: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.812: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.849: INFO: Unable to read jessie_udp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.854: INFO: Unable to read jessie_tcp@dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.860: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.865: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local from pod dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8: the server could not find the requested resource (get pods dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8) +Apr 29 19:41:42.896: INFO: Lookups using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 failed for: [wheezy_udp@dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@dns-test-service.dns-9936.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_udp@dns-test-service.dns-9936.svc.cluster.local jessie_tcp@dns-test-service.dns-9936.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9936.svc.cluster.local] + +Apr 29 19:41:47.897: INFO: DNS probes using dns-9936/dns-test-971726c5-e2db-40f9-9461-2ae3a91e4fe8 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:47.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9936" for this suite. 
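The probe loops above exercise A and SRV lookups for the test Service over both UDP and TCP; the repeated "could not find the requested resource" messages appear to be the framework polling for result files the probe pods have not yet written, and they clear once CoreDNS serves the records (the probes succeed at 19:41:47). The core of each probe is a plain dig query, runnable from any pod that has dig; the names below match the test above:

```console
$ dig +notcp +noall +answer +search dns-test-service.dns-9936.svc.cluster.local A
$ dig +tcp   +noall +answer +search _http._tcp.dns-test-service.dns-9936.svc.cluster.local SRV
```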
+ +• [SLOW TEST:37.394 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":284,"skipped":5055,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:47.973: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: validating api versions +Apr 29 19:41:48.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7050 api-versions' +Apr 29 19:41:48.099: INFO: stderr: "" +Apr 29 19:41:48.099: INFO: stdout: "acme.cert-manager.io/v1\nacme.cert-manager.io/v1alpha2\nacme.cert-manager.io/v1alpha3\nacme.cert-manager.io/v1beta1\naddons.cluster.x-k8s.io/v1alpha3\naddons.cluster.x-k8s.io/v1alpha4\naddons.cluster.x-k8s.io/v1beta1\nadmissionregistration.k8s.io/v1\nako.vmware.com/v1alpha1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbootstrap.cluster.x-k8s.io/v1alpha3\nbootstrap.cluster.x-k8s.io/v1alpha4\nbootstrap.cluster.x-k8s.io/v1beta1\ncert-manager.io/v1\ncert-manager.io/v1alpha2\ncert-manager.io/v1alpha3\ncert-manager.io/v1beta1\ncertificates.k8s.io/v1\ncli.tanzu.vmware.com/v1alpha1\ncluster.x-k8s.io/v1alpha3\ncluster.x-k8s.io/v1alpha4\ncluster.x-k8s.io/v1beta1\nclusterctl.cluster.x-k8s.io/v1alpha3\nclusterinformation.antrea.tanzu.vmware.com/v1beta1\ncns.vmware.com/v1alpha1\nconfig.tanzu.vmware.com/v1alpha1\ncontrolplane.antrea.io/v1beta2\ncontrolplane.antrea.tanzu.vmware.com/v1beta1\ncontrolplane.antrea.tanzu.vmware.com/v1beta2\ncontrolplane.cluster.x-k8s.io/v1alpha3\ncontrolplane.cluster.x-k8s.io/v1alpha4\ncontrolplane.cluster.x-k8s.io/v1beta1\ncoordination.k8s.io/v1\ncore.antrea.tanzu.vmware.com/v1alpha2\ncrd.antrea.io/v1alpha1\ncrd.antrea.io/v1alpha2\ncrd.antrea.io/v1alpha3\ncrd.antrea.io/v1beta1\ncrd.antrea.tanzu.vmware.com/v1alpha1\ndata.packaging.carvel.dev/v1alpha1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\ninfrastructure.cluster.x-k8s.io/v1alpha3\ninfrastructure.cluster.x-k8s.io/v1alpha4\ninfrastructure.cluster.x-k8s.io/v1beta1\ninternal.packaging.carvel.dev/v1alpha1\nkappctrl.k14s.io/v1alpha1\nmetrics.k8s.io/v1beta1\nnet
working.k8s.io/v1\nnetworking.tkg.tanzu.vmware.com/v1alpha1\nnetworking.x-k8s.io/v1alpha1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\nops.antrea.tanzu.vmware.com/v1alpha1\npackaging.carvel.dev/v1alpha1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrun.tanzu.vmware.com/v1alpha1\nscheduling.k8s.io/v1\nsecretgen.carvel.dev/v1alpha1\nsecretgen.k14s.io/v1alpha1\nsecurity.antrea.tanzu.vmware.com/v1alpha1\nstats.antrea.io/v1alpha1\nstats.antrea.tanzu.vmware.com/v1alpha1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nsystem.antrea.io/v1beta1\nsystem.antrea.tanzu.vmware.com/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:41:48.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7050" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":285,"skipped":5097,"failed":0} +SSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:41:48.111: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:41:48.156: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:41:50.162: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:41:52.161: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:41:54.160: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:41:56.161: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:41:58.163: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:42:00.161: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:42:02.161: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:42:04.164: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:42:06.165: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = false) +Apr 29 19:42:08.164: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is 
Running (Ready = false) +Apr 29 19:42:10.161: INFO: The status of Pod test-webserver-589a5884-bd43-491f-b746-5b76503ab84c is Running (Ready = true) +Apr 29 19:42:10.166: INFO: Container started at 2022-04-29 19:41:49 +0000 UTC, pod became ready at 2022-04-29 19:42:08 +0000 UTC +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:10.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4586" for this suite. + +• [SLOW TEST:22.069 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":286,"skipped":5101,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:10.181: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Apr 29 19:42:12.248: INFO: pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:18.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2921" for this suite. 
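The DisruptionController test drives the eviction subresource against a PodDisruptionBudget: evictions are rejected while the PDB disallows disruptions and succeed once it is updated or deleted. A sketch of inspecting the same moving parts by hand (placeholder names; eviction.json is a hypothetical file containing a policy/v1 Eviction object):

```console
$ kubectl get pdb -n <namespace>    # check the ALLOWED DISRUPTIONS column
# an eviction is a POST to the pod's eviction subresource; a blocked one returns 429
$ kubectl create --raw /api/v1/namespaces/<namespace>/pods/<pod>/eviction -f eviction.json
```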
+ +• [SLOW TEST:8.244 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":287,"skipped":5120,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:18.429: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename hostport +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Apr 29 19:42:18.506: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:42:20.512: INFO: The status of Pod pod1 is Running (Ready = false) +Apr 29 19:42:22.512: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.180.111.35 on the node which pod1 resides and expect scheduled +Apr 29 19:42:22.526: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:42:24.531: INFO: The status of Pod pod2 is Running (Ready = false) +Apr 29 19:42:26.531: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.180.111.35 but use UDP protocol on the node which pod2 resides +Apr 29 19:42:26.545: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:42:28.552: INFO: The status of Pod pod3 is Running (Ready = false) +Apr 29 19:42:30.550: INFO: The status of Pod pod3 is Running (Ready = true) +Apr 29 19:42:30.560: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:42:32.567: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Apr 29 19:42:32.571: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.180.111.35 http://127.0.0.1:54323/hostname] Namespace:hostport-5291 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:42:32.571: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: 
checking connectivity from pod e2e-host-exec to serverIP: 10.180.111.35, port: 54323 +Apr 29 19:42:32.929: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.180.111.35:54323/hostname] Namespace:hostport-5291 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:42:32.929: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.180.111.35, port: 54323 UDP +Apr 29 19:42:33.059: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.180.111.35 54323] Namespace:hostport-5291 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:42:33.059: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-5291" for this suite. + +• [SLOW TEST:19.774 seconds] +[sig-network] HostPort +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":288,"skipped":5150,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:38.204: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Apr 29 19:42:38.285: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Apr 29 19:42:38.327: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:38.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-6447" for this suite. 
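The RuntimeClass check walks the node.k8s.io/v1 API through create, get, list, watch, patch, update, and delete. The same discovery path is visible from kubectl:

```console
$ kubectl api-resources --api-group=node.k8s.io
$ kubectl get runtimeclasses.v1.node.k8s.io
```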
+•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":289,"skipped":5186,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:38.390: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating server pod server in namespace prestop-1374 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-1374 +STEP: Deleting pre-stop pod +Apr 29 19:42:47.531: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:47.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-1374" for this suite. 
+ +• [SLOW TEST:9.169 seconds] +[sig-node] PreStop +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":290,"skipped":5224,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:47.559: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:42:47.598: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:48.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7598" for this suite. 
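This check exercises the /status subresource of a CustomResourceDefinition (get, update, patch). Outside the suite, the status stanza of any CRD can be read directly; the name is a placeholder:

```console
$ kubectl get crd <name> -o jsonpath='{.status.conditions[*].type}'
```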
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":291,"skipped":5233,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:48.169: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:42:48.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f" in namespace "projected-7213" to be "Succeeded or Failed" +Apr 29 19:42:48.258: INFO: Pod "downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.686484ms +Apr 29 19:42:50.263: INFO: Pod "downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009193856s +STEP: Saw pod success +Apr 29 19:42:50.263: INFO: Pod "downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f" satisfied condition "Succeeded or Failed" +Apr 29 19:42:50.268: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f container client-container: +STEP: delete the pod +Apr 29 19:42:50.315: INFO: Waiting for pod downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f to disappear +Apr 29 19:42:50.324: INFO: Pod downwardapi-volume-4459ae26-00c3-4657-8308-7b2565f8820f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:50.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7213" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":292,"skipped":5238,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:50.337: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0666 on node default medium +Apr 29 19:42:50.390: INFO: Waiting up to 5m0s for pod "pod-f4121df3-1e27-447f-a094-8f683e007d5d" in namespace "emptydir-5163" to be "Succeeded or Failed" +Apr 29 19:42:50.396: INFO: Pod "pod-f4121df3-1e27-447f-a094-8f683e007d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055163ms +Apr 29 19:42:52.404: INFO: Pod "pod-f4121df3-1e27-447f-a094-8f683e007d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013552725s +STEP: Saw pod success +Apr 29 19:42:52.404: INFO: Pod "pod-f4121df3-1e27-447f-a094-8f683e007d5d" satisfied condition "Succeeded or Failed" +Apr 29 19:42:52.409: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-f4121df3-1e27-447f-a094-8f683e007d5d container test-container: +STEP: delete the pod +Apr 29 19:42:52.429: INFO: Waiting for pod pod-f4121df3-1e27-447f-a094-8f683e007d5d to disappear +Apr 29 19:42:52.433: INFO: Pod pod-f4121df3-1e27-447f-a094-8f683e007d5d no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:52.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5163" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":293,"skipped":5247,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:52.446: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:52.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5046" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":294,"skipped":5255,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:52.579: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename certificates +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Apr 29 19:42:53.474: INFO: starting watch +STEP: patching +STEP: updating +Apr 29 19:42:53.503: INFO: waiting for watch events with expected annotations +Apr 29 19:42:53.503: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] 
Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-3872" for this suite. +•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":295,"skipped":5306,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:53.645: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-e3af4bb3-b367-41b5-b494-df7e4c1f5bc7 +STEP: Creating a pod to test consume configMaps +Apr 29 19:42:53.724: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa" in namespace "projected-6363" to be "Succeeded or Failed" +Apr 29 19:42:53.733: INFO: Pod "pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.083367ms +Apr 29 19:42:55.738: INFO: Pod "pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa": Phase="Running", Reason="", readiness=true. Elapsed: 2.013690924s +Apr 29 19:42:57.742: INFO: Pod "pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01794277s +STEP: Saw pod success +Apr 29 19:42:57.742: INFO: Pod "pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa" satisfied condition "Succeeded or Failed" +Apr 29 19:42:57.746: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa container agnhost-container: +STEP: delete the pod +Apr 29 19:42:57.769: INFO: Waiting for pod pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa to disappear +Apr 29 19:42:57.773: INFO: Pod pod-projected-configmaps-34f3e181-428d-47a9-97b6-b66ccd6e88aa no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:42:57.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6363" for this suite. 
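The CSR lifecycle walked through above (create, patch, approve, status) maps directly onto the certificates.k8s.io/v1 API. A hand-driven version might look like this; key material and names here are invented:

```console
$ openssl req -new -newkey rsa:2048 -nodes -keyout demo.key \
    -subj "/CN=demo-user" -out demo.csr
$ kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 -w0 < demo.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
$ kubectl get csr demo-csr
$ kubectl certificate approve demo-csr    # drives the /approval subresource the test patches
```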
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5327,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:42:57.789: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-4503, will wait for the garbage collector to delete the pods +Apr 29 19:43:01.911: INFO: Deleting Job.batch foo took: 7.077551ms +Apr 29 19:43:02.013: INFO: Terminating Job.batch foo pods took: 101.55299ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:43:33.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-4503" for this suite. + +• [SLOW TEST:35.448 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":297,"skipped":5346,"failed":0} +SSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:43:33.241: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating service in namespace services-6925 +Apr 29 19:43:33.307: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:43:35.313: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Apr 29 19:43:35.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 
http://localhost:10249/proxyMode' +Apr 29 19:43:35.956: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Apr 29 19:43:35.956: INFO: stdout: "iptables" +Apr 29 19:43:35.956: INFO: proxyMode: iptables +Apr 29 19:43:35.965: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Apr 29 19:43:35.969: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-6925 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-6925 +I0429 19:43:35.992958 25 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6925, replica count: 3 +I0429 19:43:39.044869 25 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:43:39.061: INFO: Creating new exec pod +Apr 29 19:43:42.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Apr 29 19:43:42.281: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Apr 29 19:43:42.281: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:43:42.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.71.144.136 80' +Apr 29 19:43:42.472: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.71.144.136 80\nConnection to 100.71.144.136 80 port [tcp/http] succeeded!\n" +Apr 29 19:43:42.472: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:43:42.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.111.35 32480' +Apr 29 19:43:42.662: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.111.35 32480\nConnection to 10.180.111.35 32480 port [tcp/*] succeeded!\n" +Apr 29 19:43:42.662: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:43:42.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.180.99.66 32480' +Apr 29 19:43:42.829: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.180.99.66 32480\nConnection to 10.180.99.66 32480 port [tcp/*] succeeded!\n" +Apr 29 19:43:42.829: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Apr 29 19:43:42.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.180.111.35:32480/ ; done' +Apr 29 19:43:43.096: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n" +Apr 29 19:43:43.096: INFO: stdout: "\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh\naffinity-nodeport-timeout-hlzdh" +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.096: INFO: Received response from host: affinity-nodeport-timeout-hlzdh +Apr 29 19:43:43.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.180.111.35:32480/' +Apr 29 19:43:43.447: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n" +Apr 29 19:43:43.447: INFO: stdout: "affinity-nodeport-timeout-hlzdh" +Apr 29 19:44:03.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-6925 exec execpod-affinitygqsdm -- /bin/sh -x -c curl -q -s --connect-timeout 2 
http://10.180.111.35:32480/' +Apr 29 19:44:03.693: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.180.111.35:32480/\n" +Apr 29 19:44:03.693: INFO: stdout: "affinity-nodeport-timeout-hftpt" +Apr 29 19:44:03.694: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6925, will wait for the garbage collector to delete the pods +Apr 29 19:44:03.769: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.460696ms +Apr 29 19:44:03.869: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.483353ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:44:05.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6925" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:32.771 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":298,"skipped":5355,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:44:06.014: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:44:06.067: INFO: Waiting up to 5m0s for pod "downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2" in namespace "downward-api-6403" to be "Succeeded or Failed" +Apr 29 19:44:06.072: INFO: Pod "downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.826896ms +Apr 29 19:44:08.078: INFO: Pod "downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011265512s +STEP: Saw pod success +Apr 29 19:44:08.078: INFO: Pod "downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2" satisfied condition "Succeeded or Failed" +Apr 29 19:44:08.082: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2 container client-container: +STEP: delete the pod +Apr 29 19:44:08.108: INFO: Waiting for pod downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2 to disappear +Apr 29 19:44:08.113: INFO: Pod downwardapi-volume-973a3ee0-19a2-41af-b8f3-64d8c1ddaed2 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:44:08.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6403" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":299,"skipped":5372,"failed":0} +SSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:44:08.130: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:44:19.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2712" for this suite. + +• [SLOW TEST:11.186 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":346,"completed":300,"skipped":5380,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:44:19.321: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating pod pod-subpath-test-configmap-cxx7 +STEP: Creating a pod to test atomic-volume-subpath +Apr 29 19:44:19.379: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cxx7" in namespace "subpath-5139" to be "Succeeded or Failed" +Apr 29 19:44:19.384: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.591364ms +Apr 29 19:44:21.390: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 2.010788355s +Apr 29 19:44:23.396: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 4.017409261s +Apr 29 19:44:25.403: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 6.023829511s +Apr 29 19:44:27.414: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 8.03507104s +Apr 29 19:44:29.420: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 10.041570285s +Apr 29 19:44:31.426: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 12.046962812s +Apr 29 19:44:33.433: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 14.054194869s +Apr 29 19:44:35.440: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 16.060997038s +Apr 29 19:44:37.445: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 18.066149719s +Apr 29 19:44:39.452: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Running", Reason="", readiness=true. Elapsed: 20.073085525s +Apr 29 19:44:41.457: INFO: Pod "pod-subpath-test-configmap-cxx7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.078528645s +STEP: Saw pod success +Apr 29 19:44:41.457: INFO: Pod "pod-subpath-test-configmap-cxx7" satisfied condition "Succeeded or Failed" +Apr 29 19:44:41.462: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-subpath-test-configmap-cxx7 container test-container-subpath-configmap-cxx7: +STEP: delete the pod +Apr 29 19:44:41.481: INFO: Waiting for pod pod-subpath-test-configmap-cxx7 to disappear +Apr 29 19:44:41.484: INFO: Pod pod-subpath-test-configmap-cxx7 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-cxx7 +Apr 29 19:44:41.485: INFO: Deleting pod "pod-subpath-test-configmap-cxx7" in namespace "subpath-5139" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:44:41.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-5139" for this suite. + +• [SLOW TEST:22.184 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":301,"skipped":5412,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:44:41.507: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Apr 29 19:44:41.553: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:44:50.634: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:45:19.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-5834" for this suite. 
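The atomic-writer subpath case above mounts a single configMap key over a file path that already exists in the container image. A rough equivalent of that mount shape, with invented names (the suite's pod additionally loops to verify atomic updates, which this sketch skips):

```console
$ kubectl create configmap subpath-demo --from-literal=greeting=hello
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["cat", "/etc/hostname"]   # an existing file, now backed by the configMap key
    volumeMounts:
    - name: cm
      mountPath: /etc/hostname
      subPath: greeting                 # mount just this key over the file
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF
```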
+
+• [SLOW TEST:38.286 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for multiple CRDs of same group and version but different kinds [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":302,"skipped":5447,"failed":0}
+SSSSS
+------------------------------
+[sig-storage] Projected configMap
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Apr 29 19:45:19.794: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
+STEP: Creating configMap with name projected-configmap-test-volume-map-59855ce2-9960-4169-b69e-6af01969a186
+STEP: Creating a pod to test consume configMaps
+Apr 29 19:45:19.850: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a" in namespace "projected-1353" to be "Succeeded or Failed"
+Apr 29 19:45:19.857: INFO: Pod "pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.359669ms
+Apr 29 19:45:21.865: INFO: Pod "pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014750358s
+STEP: Saw pod success
+Apr 29 19:45:21.865: INFO: Pod "pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a" satisfied condition "Succeeded or Failed"
+Apr 29 19:45:21.868: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a container agnhost-container:
+STEP: delete the pod
+Apr 29 19:45:21.911: INFO: Waiting for pod pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a to disappear
+Apr 29 19:45:21.914: INFO: Pod pod-projected-configmaps-17f07624-5d3c-449e-b6ad-b33e2b5bb89a no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Apr 29 19:45:21.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1353" for this suite.
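The OpenAPI-publishing check that finished above amounts to: install two CRDs sharing a group/version, then confirm both kinds are described in the served schema. With any CRD of your own installed (names below are placeholders), the same surface is visible with stock kubectl:

```console
$ kubectl get --raw /openapi/v2 > openapi.json      # the document the test walks
$ kubectl explain <your-kind> --recursive           # per-kind view of the published schema
```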
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":303,"skipped":5452,"failed":0} +SSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:45:21.925: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:45:21.963: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Apr 29 19:45:24.015: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:45:25.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9171" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":304,"skipped":5456,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:45:25.129: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:45:25.592: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:45:28.620: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:45:28.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9288" for this suite. +STEP: Destroying namespace "webhook-9288-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":305,"skipped":5461,"failed":0} +SSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:45:28.695: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name configmap-test-volume-5765f7ba-e1e5-45ca-b580-d8da81f126ec +STEP: Creating a pod to test consume configMaps +Apr 29 19:45:28.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774" in namespace "configmap-7268" to be "Succeeded or Failed" +Apr 29 19:45:28.752: INFO: Pod "pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417489ms +Apr 29 19:45:30.758: INFO: Pod "pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009118574s +STEP: Saw pod success +Apr 29 19:45:30.758: INFO: Pod "pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774" satisfied condition "Succeeded or Failed" +Apr 29 19:45:30.762: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774 container agnhost-container: +STEP: delete the pod +Apr 29 19:45:30.784: INFO: Waiting for pod pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774 to disappear +Apr 29 19:45:30.790: INFO: Pod pod-configmaps-6336ae11-fe0b-41ec-8ab1-d57ec08ac774 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:45:30.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7268" for this suite. 
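Stepping back to the discovery checks in the AdmissionWebhook test above: the same documents the test fetches are reachable with stock kubectl, no webhook deployment required. A couple of equivalent probes:

```console
$ kubectl api-versions | grep admissionregistration          # group/version in /apis
$ kubectl api-resources --api-group=admissionregistration.k8s.io   # the two webhook-configuration kinds
$ kubectl get --raw /apis/admissionregistration.k8s.io/v1    # the raw discovery document itself
```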
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":306,"skipped":5465,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:45:30.803: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-934 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-934 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-934 +Apr 29 19:45:30.870: INFO: Found 0 stateful pods, waiting for 1 +Apr 29 19:45:40.886: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Apr 29 19:45:40.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 19:45:41.091: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 19:45:41.091: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 19:45:41.091: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 19:45:41.097: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Apr 29 19:45:51.104: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 19:45:51.104: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 19:45:51.137: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999942s +Apr 29 19:45:52.145: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986566869s +Apr 29 19:45:53.152: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978489949s +Apr 29 19:45:54.158: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971635782s +Apr 29 19:45:55.165: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.965598396s +Apr 29 19:45:56.175: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.958074825s +Apr 29 19:45:57.185: INFO: Verifying statefulset ss doesn't scale past 
1 for another 3.948268932s +Apr 29 19:45:58.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.938119828s +Apr 29 19:45:59.205: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.926230692s +Apr 29 19:46:00.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 918.407924ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-934 +Apr 29 19:46:01.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 19:46:01.460: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 19:46:01.460: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 19:46:01.460: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 19:46:01.467: INFO: Found 1 stateful pods, waiting for 3 +Apr 29 19:46:11.475: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 19:46:11.475: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Apr 29 19:46:11.475: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Apr 29 19:46:11.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 19:46:11.660: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 19:46:11.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 19:46:11.660: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 19:46:11.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 19:46:11.858: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 19:46:11.858: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 19:46:11.858: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 19:46:11.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Apr 29 19:46:12.103: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Apr 29 19:46:12.103: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Apr 29 19:46:12.103: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Apr 29 19:46:12.103: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 19:46:12.108: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Apr 29 19:46:22.122: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 19:46:22.122: 
INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 19:46:22.122: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Apr 29 19:46:22.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999587s +Apr 29 19:46:23.152: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990630792s +Apr 29 19:46:24.160: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983157334s +Apr 29 19:46:25.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975558864s +Apr 29 19:46:26.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.967252219s +Apr 29 19:46:27.182: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960210927s +Apr 29 19:46:28.190: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953066568s +Apr 29 19:46:29.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944239421s +Apr 29 19:46:30.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.936205011s +Apr 29 19:46:31.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.585074ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-934 +Apr 29 19:46:32.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 19:46:32.405: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 19:46:32.405: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 19:46:32.405: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 19:46:32.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 19:46:32.624: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 19:46:32.624: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 19:46:32.624: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 19:46:32.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=statefulset-934 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Apr 29 19:46:32.796: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Apr 29 19:46:32.796: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Apr 29 19:46:32.796: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Apr 29 19:46:32.796: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 19:46:42.825: INFO: Deleting all statefulset in ns statefulset-934 +Apr 29 19:46:42.832: INFO: Scaling statefulset ss to 0 +Apr 29 19:46:42.850: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 
19:46:42.855: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:46:42.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-934" for this suite. + +• [SLOW TEST:72.104 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":307,"skipped":5487,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:46:42.908: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-790 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating statefulset ss in namespace statefulset-790 +Apr 29 19:46:42.998: INFO: Found 0 stateful pods, waiting for 1 +Apr 29 19:46:53.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 19:46:53.078: INFO: Deleting all statefulset in ns statefulset-790 +Apr 29 19:46:53.094: INFO: Scaling statefulset ss to 0 +Apr 29 19:47:03.182: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 19:47:03.187: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:03.207: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready +STEP: Destroying namespace "statefulset-790" for this suite. + +• [SLOW TEST:20.317 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":308,"skipped":5526,"failed":0} +SSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:03.226: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-7177 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-7177 +I0429 19:47:03.327438 25 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7177, replica count: 2 +I0429 19:47:06.379437 25 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Apr 29 19:47:06.379: INFO: Creating new exec pod +Apr 29 19:47:09.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-7177 exec execpod7l5l2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Apr 29 19:47:09.682: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Apr 29 19:47:09.682: INFO: stdout: "externalname-service-2w5v4" +Apr 29 19:47:09.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=services-7177 exec execpod7l5l2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 100.69.172.77 80' +Apr 29 19:47:09.949: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 100.69.172.77 80\nConnection to 100.69.172.77 80 port [tcp/http] succeeded!\n" +Apr 29 19:47:09.949: INFO: stdout: "externalname-service-drrjd" +Apr 29 19:47:09.949: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 
19:47:09.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7177" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:6.752 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":309,"skipped":5529,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:09.978: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test substitution in volume subpath +Apr 29 19:47:10.027: INFO: Waiting up to 5m0s for pod "var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a" in namespace "var-expansion-9804" to be "Succeeded or Failed" +Apr 29 19:47:10.031: INFO: Pod "var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.812974ms +Apr 29 19:47:12.039: INFO: Pod "var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012301561s +STEP: Saw pod success +Apr 29 19:47:12.039: INFO: Pod "var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a" satisfied condition "Succeeded or Failed" +Apr 29 19:47:12.112: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a container dapi-container: +STEP: delete the pod +Apr 29 19:47:12.248: INFO: Waiting for pod var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a to disappear +Apr 29 19:47:12.253: INFO: Pod var-expansion-6e68775a-f98d-49d6-af73-1d7b96e0636a no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:12.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9804" for this suite. 
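+
+The substitution exercised above can be reproduced outside the suite with a short pod spec. A minimal sketch (pod, volume, and image names are illustrative, not the suite's exact fixture), using `subPathExpr` to expand an env var into the volume mount's subpath:
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "mount | grep /logs"]
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    volumeMounts:
+    - name: workdir
+      mountPath: /logs
+      subPathExpr: $(POD_NAME)   # expanded by the kubelet, not the shell
+  volumes:
+  - name: workdir
+    emptyDir: {}
+EOF
+```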
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":310,"skipped":5541,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:12.267: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:47:12.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b" in namespace "downward-api-6969" to be "Succeeded or Failed" +Apr 29 19:47:12.326: INFO: Pod "downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.719153ms +Apr 29 19:47:14.331: INFO: Pod "downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013983511s +STEP: Saw pod success +Apr 29 19:47:14.331: INFO: Pod "downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b" satisfied condition "Succeeded or Failed" +Apr 29 19:47:14.335: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b container client-container: +STEP: delete the pod +Apr 29 19:47:14.354: INFO: Waiting for pod downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b to disappear +Apr 29 19:47:14.357: INFO: Pod downwardapi-volume-fed52285-9db2-4798-a562-7b6fa8158c1b no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:14.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6969" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":311,"skipped":5568,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:14.370: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward api env vars +Apr 29 19:47:14.408: INFO: Waiting up to 5m0s for pod "downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865" in namespace "downward-api-3448" to be "Succeeded or Failed" +Apr 29 19:47:14.413: INFO: Pod "downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329292ms +Apr 29 19:47:16.417: INFO: Pod "downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008851104s +STEP: Saw pod success +Apr 29 19:47:16.417: INFO: Pod "downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865" satisfied condition "Succeeded or Failed" +Apr 29 19:47:16.423: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865 container dapi-container: +STEP: delete the pod +Apr 29 19:47:16.450: INFO: Waiting for pod downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865 to disappear +Apr 29 19:47:16.453: INFO: Pod downward-api-bd586bb5-22c1-4745-8bc4-8488310f2865 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:16.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3448" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":312,"skipped":5578,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:16.471: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating Agnhost RC +Apr 29 19:47:16.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9681 create -f -' +Apr 29 19:47:18.584: INFO: stderr: "" +Apr 29 19:47:18.584: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Apr 29 19:47:19.596: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:47:19.596: INFO: Found 0 / 1 +Apr 29 19:47:20.590: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:47:20.590: INFO: Found 1 / 1 +Apr 29 19:47:20.590: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Apr 29 19:47:20.595: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:47:20.595: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Apr 29 19:47:20.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9681 patch pod agnhost-primary-fknc2 -p {"metadata":{"annotations":{"x":"y"}}}' +Apr 29 19:47:20.680: INFO: stderr: "" +Apr 29 19:47:20.680: INFO: stdout: "pod/agnhost-primary-fknc2 patched\n" +STEP: checking annotations +Apr 29 19:47:20.686: INFO: Selector matched 1 pods for map[app:agnhost] +Apr 29 19:47:20.686: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:20.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9681" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":313,"skipped":5594,"failed":0} +SS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:20.701: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating projection with secret that has name secret-emptykey-test-d883250f-562d-4bea-afd9-0c33482cf490 +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:20.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-8854" for this suite. +•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":314,"skipped":5596,"failed":0} +SSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:20.752: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test env composition +Apr 29 19:47:20.804: INFO: Waiting up to 5m0s for pod "var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1" in namespace "var-expansion-1201" to be "Succeeded or Failed" +Apr 29 19:47:20.812: INFO: Pod "var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306133ms +Apr 29 19:47:22.817: INFO: Pod "var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013264445s +Apr 29 19:47:24.823: INFO: Pod "var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019109094s +STEP: Saw pod success +Apr 29 19:47:24.823: INFO: Pod "var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1" satisfied condition "Succeeded or Failed" +Apr 29 19:47:24.828: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1 container dapi-container: +STEP: delete the pod +Apr 29 19:47:24.850: INFO: Waiting for pod var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1 to disappear +Apr 29 19:47:24.854: INFO: Pod var-expansion-055c42be-8fb5-4087-b5f4-bddf34d0dec1 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:24.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-1201" for this suite. +•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":315,"skipped":5600,"failed":0} + +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:24.867: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:47:24.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8" in namespace "projected-7149" to be "Succeeded or Failed" +Apr 29 19:47:24.919: INFO: Pod "downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.734335ms +Apr 29 19:47:26.924: INFO: Pod "downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009883304s +Apr 29 19:47:28.931: INFO: Pod "downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016617338s +STEP: Saw pod success +Apr 29 19:47:28.931: INFO: Pod "downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8" satisfied condition "Succeeded or Failed" +Apr 29 19:47:28.935: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8 container client-container: +STEP: delete the pod +Apr 29 19:47:28.959: INFO: Waiting for pod downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8 to disappear +Apr 29 19:47:28.963: INFO: Pod downwardapi-volume-ab1b544d-df28-46a2-b688-ec701bfadfa8 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:28.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7149" for this suite. +•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":316,"skipped":5600,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:28.976: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Apr 29 19:47:29.028: INFO: The status of Pod labelsupdate508c8f64-c16f-4203-9624-3c6bfed2d20a is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:47:31.035: INFO: The status of Pod labelsupdate508c8f64-c16f-4203-9624-3c6bfed2d20a is Running (Ready = true) +Apr 29 19:47:31.571: INFO: Successfully updated pod "labelsupdate508c8f64-c16f-4203-9624-3c6bfed2d20a" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:33.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7377" for this suite. 
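+
+The label update above is a plain `kubectl label`, after which the downward API volume re-projects the labels file on the live pod. A sketch using the pod name from this run, assuming the labels file is mounted at `/etc/podinfo/labels` (the path is an assumption about the suite's fixture):
+
+```console
+$ kubectl --namespace=downward-api-7377 label pod \
+    labelsupdate508c8f64-c16f-4203-9624-3c6bfed2d20a key=value --overwrite
+$ kubectl --namespace=downward-api-7377 exec \
+    labelsupdate508c8f64-c16f-4203-9624-3c6bfed2d20a -- cat /etc/podinfo/labels
+```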
+•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":317,"skipped":5607,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:33.601: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:47:34.176: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:47:37.218: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:37.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-8030" for this suite. +STEP: Destroying namespace "webhook-8030-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":318,"skipped":5614,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:37.347: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:47:37.736: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:47:40.771: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:47:40.778: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:43.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-9854" for this suite. 
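+
+Conversion between CR versions is configured on the CRD itself via `spec.conversion`. A sketch of enabling webhook conversion on a hypothetical CRD `foos.example.com` (the CRD name, converter service name, and path are placeholders):
+
+```console
+$ kubectl patch crd foos.example.com --type=merge -p \
+    '{"spec":{"conversion":{"strategy":"Webhook","webhook":{"conversionReviewVersions":["v1"],"clientConfig":{"service":{"namespace":"crd-webhook-9854","name":"e2e-test-crd-conversion-webhook","path":"/crdconvert"}}}}}}'
+```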
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:6.675 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":319,"skipped":5638,"failed":0} +SSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:44.023: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:47:44.073: INFO: Creating ReplicaSet my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc +Apr 29 19:47:44.093: INFO: Pod name my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc: Found 0 pods out of 1 +Apr 29 19:47:49.106: INFO: Pod name my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc: Found 1 pods out of 1 +Apr 29 19:47:49.106: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc" is running +Apr 29 19:47:49.114: INFO: Pod "my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc-lj44t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 19:47:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 19:47:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 19:47:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-29 19:47:44 +0000 UTC Reason: Message:}]) +Apr 29 19:47:49.115: INFO: Trying to dial the pod +Apr 29 19:47:54.133: INFO: Controller my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc: Got expected result from replica 1 [my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc-lj44t]: "my-hostname-basic-3a0da302-23c4-404f-808e-1e0b72cb49dc-lj44t", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:47:54.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-6430" for this suite. 
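+
+The ReplicaSet under test serves each pod's own hostname over HTTP. A minimal sketch of an equivalent manifest (the ReplicaSet name and agnhost image tag are illustrative):
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: apps/v1
+kind: ReplicaSet
+metadata:
+  name: my-hostname-basic
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: my-hostname-basic
+  template:
+    metadata:
+      labels:
+        name: my-hostname-basic
+    spec:
+      containers:
+      - name: my-hostname-basic
+        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
+        args: ["serve-hostname"]
+EOF
+$ kubectl get pods -l name=my-hostname-basic
+```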
+ +• [SLOW TEST:10.124 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":320,"skipped":5641,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:47:54.149: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Performing setup for networking test in namespace pod-network-test-265 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Apr 29 19:47:54.202: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Apr 29 19:47:54.248: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:47:56.254: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:47:58.255: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:00.256: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:02.255: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:04.254: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:06.255: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:08.254: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:10.255: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:12.254: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:14.253: INFO: The status of Pod netserver-0 is Running (Ready = false) +Apr 29 19:48:16.254: INFO: The status of Pod netserver-0 is Running (Ready = true) +Apr 29 19:48:16.265: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Apr 29 19:48:18.290: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Apr 29 19:48:18.290: INFO: Breadth first check of 100.96.0.166 on host 10.180.111.35... 
+Apr 29 19:48:18.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.191:9080/dial?request=hostname&protocol=udp&host=100.96.0.166&port=8081&tries=1'] Namespace:pod-network-test-265 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:48:18.296: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:48:18.421: INFO: Waiting for responses: map[] +Apr 29 19:48:18.421: INFO: reached 100.96.0.166 after 0/1 tries +Apr 29 19:48:18.421: INFO: Breadth first check of 100.96.1.190 on host 10.180.99.66... +Apr 29 19:48:18.426: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.1.191:9080/dial?request=hostname&protocol=udp&host=100.96.1.190&port=8081&tries=1'] Namespace:pod-network-test-265 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Apr 29 19:48:18.426: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +Apr 29 19:48:18.533: INFO: Waiting for responses: map[] +Apr 29 19:48:18.533: INFO: reached 100.96.1.190 after 0/1 tries +Apr 29 19:48:18.533: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:48:18.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-265" for this suite. + +• [SLOW TEST:24.400 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":321,"skipped":5657,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:48:18.549: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:48:18.586: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Apr 29 19:48:27.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 create -f -' +Apr 29 
19:48:29.569: INFO: stderr: "" +Apr 29 19:48:29.569: INFO: stdout: "e2e-test-crd-publish-openapi-5521-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Apr 29 19:48:29.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 delete e2e-test-crd-publish-openapi-5521-crds test-foo' +Apr 29 19:48:29.657: INFO: stderr: "" +Apr 29 19:48:29.657: INFO: stdout: "e2e-test-crd-publish-openapi-5521-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Apr 29 19:48:29.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 apply -f -' +Apr 29 19:48:30.105: INFO: stderr: "" +Apr 29 19:48:30.105: INFO: stdout: "e2e-test-crd-publish-openapi-5521-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Apr 29 19:48:30.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 delete e2e-test-crd-publish-openapi-5521-crds test-foo' +Apr 29 19:48:30.207: INFO: stderr: "" +Apr 29 19:48:30.207: INFO: stdout: "e2e-test-crd-publish-openapi-5521-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Apr 29 19:48:30.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 create -f -' +Apr 29 19:48:30.904: INFO: rc: 1 +Apr 29 19:48:30.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 apply -f -' +Apr 29 19:48:31.300: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Apr 29 19:48:31.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 create -f -' +Apr 29 19:48:31.699: INFO: rc: 1 +Apr 29 19:48:31.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 --namespace=crd-publish-openapi-5746 apply -f -' +Apr 29 19:48:32.199: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Apr 29 19:48:32.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 explain e2e-test-crd-publish-openapi-5521-crds' +Apr 29 19:48:32.605: INFO: stderr: "" +Apr 29 19:48:32.605: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5521-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Apr 29 19:48:32.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 explain e2e-test-crd-publish-openapi-5521-crds.metadata' +Apr 29 19:48:33.000: INFO: stderr: "" +Apr 29 19:48:33.000: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5521-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. 
If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Apr 29 19:48:33.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 explain e2e-test-crd-publish-openapi-5521-crds.spec' +Apr 29 19:48:33.399: INFO: stderr: "" +Apr 29 19:48:33.399: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5521-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Apr 29 19:48:33.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 explain e2e-test-crd-publish-openapi-5521-crds.spec.bars' +Apr 29 19:48:33.777: INFO: stderr: "" +Apr 29 19:48:33.777: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5521-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Apr 29 19:48:33.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-5746 explain e2e-test-crd-publish-openapi-5521-crds.spec.bars2' +Apr 29 19:48:34.149: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:48:41.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-5746" for this suite. 
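+
+`kubectl explain` resolves all of the descriptions above from the OpenAPI schema the API server publishes for the CRD, so the same queries work against any CRD with a structural schema; the invocations from this run, minus the kubeconfig flag:
+
+```console
+$ kubectl explain e2e-test-crd-publish-openapi-5521-crds.spec
+$ kubectl explain e2e-test-crd-publish-openapi-5521-crds.spec.bars
+```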
+ +• [SLOW TEST:23.292 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":322,"skipped":5660,"failed":0} +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:48:41.842: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Apr 29 19:48:41.886: INFO: Waiting up to 5m0s for pod "pod-fa0a7e06-33ae-47dc-98b8-824061061f44" in namespace "emptydir-9906" to be "Succeeded or Failed" +Apr 29 19:48:41.891: INFO: Pod "pod-fa0a7e06-33ae-47dc-98b8-824061061f44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.850763ms +Apr 29 19:48:43.899: INFO: Pod "pod-fa0a7e06-33ae-47dc-98b8-824061061f44": Phase="Running", Reason="", readiness=true. Elapsed: 2.012604081s +Apr 29 19:48:45.905: INFO: Pod "pod-fa0a7e06-33ae-47dc-98b8-824061061f44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018495731s +STEP: Saw pod success +Apr 29 19:48:45.905: INFO: Pod "pod-fa0a7e06-33ae-47dc-98b8-824061061f44" satisfied condition "Succeeded or Failed" +Apr 29 19:48:45.910: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-fa0a7e06-33ae-47dc-98b8-824061061f44 container test-container: +STEP: delete the pod +Apr 29 19:48:45.949: INFO: Waiting for pod pod-fa0a7e06-33ae-47dc-98b8-824061061f44 to disappear +Apr 29 19:48:45.955: INFO: Pod pod-fa0a7e06-33ae-47dc-98b8-824061061f44 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:48:45.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9906" for this suite. 
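+
+`emptyDir` volumes on the default medium are created world-writable (mode 0777) on the node, which is what lets a non-root container write to one without any `fsGroup` configuration. A minimal sketch (pod and volume names illustrative):
+
+```console
+$ kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-nonroot-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1001
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "touch /test-volume/f && ls -ld /test-volume"]
+    volumeMounts:
+    - name: test-volume
+      mountPath: /test-volume
+  volumes:
+  - name: test-volume
+    emptyDir: {}
+EOF
+```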
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":323,"skipped":5664,"failed":0} +SSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:48:45.968: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating configMap with name projected-configmap-test-volume-map-7ae1f44a-c450-4055-94d3-e4b1adc1e79e +STEP: Creating a pod to test consume configMaps +Apr 29 19:48:46.032: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8" in namespace "projected-6701" to be "Succeeded or Failed" +Apr 29 19:48:46.041: INFO: Pod "pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609752ms +Apr 29 19:48:48.048: INFO: Pod "pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8": Phase="Running", Reason="", readiness=true. Elapsed: 2.01574258s +Apr 29 19:48:50.054: INFO: Pod "pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021917834s +STEP: Saw pod success +Apr 29 19:48:50.054: INFO: Pod "pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8" satisfied condition "Succeeded or Failed" +Apr 29 19:48:50.061: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8 container agnhost-container: +STEP: delete the pod +Apr 29 19:48:50.081: INFO: Waiting for pod pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8 to disappear +Apr 29 19:48:50.085: INFO: Pod pod-projected-configmaps-b55216bd-7860-49fa-9c01-70a894cd75e8 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:48:50.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6701" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":324,"skipped":5668,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:48:50.099: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Apr 29 19:48:50.518: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Apr 29 19:48:53.549: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Apr 29 19:48:55.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=webhook-6507 attach --namespace=webhook-6507 to-be-attached-pod -i -c=container1' +Apr 29 19:48:55.693: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:48:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6507" for this suite. +STEP: Destroying namespace "webhook-6507-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:5.668 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":325,"skipped":5724,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:48:55.768: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating the pod +Apr 29 19:48:55.812: INFO: PodSpec: initContainers in spec.initContainers +Apr 29 19:49:40.484: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c34f0aaa-0fdc-4bf5-bc9d-48acb9434839", GenerateName:"", Namespace:"init-container-5946", SelfLink:"", UID:"74f147c8-dffc-4ad3-b39a-6128ace55696", ResourceVersion:"768554", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63786858535, loc:(*time.Location)(0xa0a1d40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"812673569"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00d142618), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00d142630), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00d142648), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00d142660), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-4h8k4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00753a980), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4h8k4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4h8k4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-4h8k4", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00a03a9a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tkg-mgmt-vc-md-0-59d8b7c778-msxpc", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a7ae00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00a03aa30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00a03aa50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00a03aa58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00a03aa5c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0072b37a0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786858535, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786858535, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786858535, loc:(*time.Location)(0xa0a1d40)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63786858535, loc:(*time.Location)(0xa0a1d40)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.180.99.66", PodIP:"100.96.1.196", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.196"}}, StartTime:(*v1.Time)(0xc00d1426c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a7af50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a7afc0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://9619e39eccc77b99e9f7d053d40dd1f3652351087fafa00dacb0d4a66d3012a6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00753aa00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00753a9e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.5", ImageID:"", ContainerID:"", Started:(*bool)(0xc00a03ab0f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-5946" for this suite. 
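
The pod in the dump above never leaves Pending: init1 runs /bin/false and is restarted forever (RestartCount:3 at the time of the dump), so init2 and the app container run1 are never started. The same spec, reduced to the fields that matter:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]    # always fails, so the kubelet retries it forever
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/true"]     # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.5   # never started
EOF
$ kubectl get pod init-fail-demo   # stays Init:0/2 with init1 in CrashLoopBackOff
```
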
+ +• [SLOW TEST:44.751 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":326,"skipped":5742,"failed":0} +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:40.522: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +STEP: creating the pod +Apr 29 19:49:40.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 create -f -' +Apr 29 19:49:42.610: INFO: stderr: "" +Apr 29 19:49:42.610: INFO: stdout: "pod/pause created\n" +Apr 29 19:49:42.610: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Apr 29 19:49:42.610: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7828" to be "running and ready" +Apr 29 19:49:42.617: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458004ms +Apr 29 19:49:44.624: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.013700101s +Apr 29 19:49:44.624: INFO: Pod "pause" satisfied condition "running and ready" +Apr 29 19:49:44.624: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: adding the label testing-label with value testing-label-value to a pod +Apr 29 19:49:44.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 label pods pause testing-label=testing-label-value' +Apr 29 19:49:44.724: INFO: stderr: "" +Apr 29 19:49:44.724: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Apr 29 19:49:44.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 get pod pause -L testing-label' +Apr 29 19:49:44.812: INFO: stderr: "" +Apr 29 19:49:44.812: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Apr 29 19:49:44.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 label pods pause testing-label-' +Apr 29 19:49:44.909: INFO: stderr: "" +Apr 29 19:49:44.909: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Apr 29 19:49:44.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 get pod pause -L testing-label' +Apr 29 19:49:44.996: INFO: stderr: "" +Apr 29 19:49:44.996: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +STEP: using delete to clean up resources +Apr 29 19:49:44.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 delete --grace-period=0 --force -f -' +Apr 29 19:49:45.099: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Apr 29 19:49:45.099: INFO: stdout: "pod \"pause\" force deleted\n" +Apr 29 19:49:45.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 get rc,svc -l name=pause --no-headers' +Apr 29 19:49:45.198: INFO: stderr: "No resources found in kubectl-7828 namespace.\n" +Apr 29 19:49:45.198: INFO: stdout: "" +Apr 29 19:49:45.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-7828 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Apr 29 19:49:45.284: INFO: stderr: "" +Apr 29 19:49:45.284: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:45.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7828" for this suite. 
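
The three kubectl invocations above reduce to a simple add/inspect/remove cycle; the trailing dash in the last command is what deletes the label:

```console
$ kubectl label pod pause testing-label=testing-label-value
$ kubectl get pod pause -L testing-label     # shows the value in a TESTING-LABEL column
$ kubectl label pod pause testing-label-     # trailing '-' removes the label
```
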
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":327,"skipped":5742,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:45.302: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating secret with name secret-test-4407ed9a-c4c9-4903-b78e-24e486f103fa +STEP: Creating a pod to test consume secrets +Apr 29 19:49:45.366: INFO: Waiting up to 5m0s for pod "pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31" in namespace "secrets-2095" to be "Succeeded or Failed" +Apr 29 19:49:45.375: INFO: Pod "pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130614ms +Apr 29 19:49:47.382: INFO: Pod "pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015441777s +Apr 29 19:49:49.390: INFO: Pod "pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02299328s +STEP: Saw pod success +Apr 29 19:49:49.390: INFO: Pod "pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31" satisfied condition "Succeeded or Failed" +Apr 29 19:49:49.395: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31 container secret-volume-test: +STEP: delete the pod +Apr 29 19:49:49.418: INFO: Waiting for pod pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31 to disappear +Apr 29 19:49:49.422: INFO: Pod pod-secrets-6856005c-287b-4a45-94d8-1c310a174f31 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:49.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2095" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":328,"skipped":5764,"failed":0} +SSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:49.437: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Apr 29 19:49:49.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9114 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Apr 29 19:49:49.591: INFO: stderr: "" +Apr 29 19:49:49.591: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Apr 29 19:49:49.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9114 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' +Apr 29 19:49:50.612: INFO: stderr: "" +Apr 29 19:49:50.612: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 +Apr 29 19:49:50.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=kubectl-9114 delete pods e2e-test-httpd-pod' +Apr 29 19:49:52.569: INFO: stderr: "" +Apr 29 19:49:52.569: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:52.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-9114" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":329,"skipped":5771,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:52.588: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a Pod with a static label +STEP: watching for Pod to be ready +Apr 29 19:49:52.666: INFO: observed Pod pod-test in namespace pods-8636 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Apr 29 19:49:52.666: INFO: observed Pod pod-test in namespace pods-8636 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC }] +Apr 29 19:49:52.676: INFO: observed Pod pod-test in namespace pods-8636 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC }] +Apr 29 19:49:54.565: INFO: Found Pod pod-test in namespace pods-8636 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-29 19:49:52 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data +Apr 29 19:49:54.578: INFO: observed event type ADDED +STEP: getting the Pod and ensuring that it's patched +STEP: replacing the Pod's status Ready condition to False +STEP: check the Pod again to ensure its Ready conditions are False +STEP: deleting the Pod via a Collection with a LabelSelector +STEP: watching for the Pod to be deleted +Apr 29 19:49:54.614: INFO: observed event type ADDED +Apr 29 19:49:54.614: INFO: observed event type MODIFIED +Apr 29 19:49:54.615: INFO: observed event type MODIFIED +Apr 29 19:49:54.615: INFO: observed event type MODIFIED +Apr 29 19:49:54.616: INFO: observed event type MODIFIED +Apr 29 19:49:54.616: INFO: observed event type MODIFIED +Apr 29 19:49:54.616: INFO: observed event type MODIFIED +Apr 29 19:49:56.574: INFO: observed event type MODIFIED +Apr 29 19:49:57.581: INFO: observed event type 
MODIFIED +Apr 29 19:49:57.591: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:57.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8636" for this suite. + +• [SLOW TEST:5.048 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should run through the lifecycle of Pods and PodStatus [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":330,"skipped":5787,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:57.642: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:49:57.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795" in namespace "downward-api-5786" to be "Succeeded or Failed" +Apr 29 19:49:57.767: INFO: Pod "downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795": Phase="Pending", Reason="", readiness=false. Elapsed: 9.541326ms +Apr 29 19:49:59.776: INFO: Pod "downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017873478s +STEP: Saw pod success +Apr 29 19:49:59.776: INFO: Pod "downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795" satisfied condition "Succeeded or Failed" +Apr 29 19:49:59.780: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795 container client-container: +STEP: delete the pod +Apr 29 19:49:59.800: INFO: Waiting for pod downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795 to disappear +Apr 29 19:49:59.804: INFO: Pod downwardapi-volume-ebd8756a-6492-4b5c-8c1d-bb6c3cc04795 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:59.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5786" for this suite. 
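
The downward-API volume exercised above exposes a container's own CPU request as a file inside the pod. A minimal sketch; the 250m request and the 1m divisor are illustrative choices, not values from the test:

```console
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # report in millicores, so the file contains "250"
EOF
```
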
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":331,"skipped":5816,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:59.820: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:49:59.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-7515" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":332,"skipped":5832,"failed":0} +SSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:49:59.910: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test downward API volume plugin +Apr 29 19:49:59.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf" in namespace "downward-api-2576" to be "Succeeded or Failed" +Apr 29 19:49:59.959: INFO: Pod "downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447953ms +Apr 29 19:50:01.966: INFO: Pod "downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01210111s +STEP: Saw pod success +Apr 29 19:50:01.967: INFO: Pod "downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf" satisfied condition "Succeeded or Failed" +Apr 29 19:50:01.974: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf container client-container: +STEP: delete the pod +Apr 29 19:50:01.998: INFO: Waiting for pod downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf to disappear +Apr 29 19:50:02.005: INFO: Pod downwardapi-volume-39379f5a-1261-4b2d-af3e-ff996fad46cf no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:50:02.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2576" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":333,"skipped":5837,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:50:02.029: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Apr 29 19:50:02.135: INFO: Number of nodes with available pods: 0 +Apr 29 19:50:02.135: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:03.146: INFO: Number of nodes with available pods: 0 +Apr 29 19:50:03.146: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:04.149: INFO: Number of nodes with available pods: 2 +Apr 29 19:50:04.149: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Apr 29 19:50:04.179: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:04.179: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:05.198: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:05.198: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:06.195: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:06.195: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:07.194: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:07.195: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:08.191: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:08.191: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:09.191: INFO: Number of nodes with available pods: 2 +Apr 29 19:50:09.192: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4807, will wait for the garbage collector to delete the pods +Apr 29 19:50:09.260: INFO: Deleting DaemonSet.extensions daemon-set took: 7.356017ms +Apr 29 19:50:09.361: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.031792ms +Apr 29 19:50:11.867: INFO: Number of nodes with available pods: 0 +Apr 29 19:50:11.867: INFO: Number of running nodes: 0, number of available pods: 0 +Apr 29 19:50:11.873: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"769047"},"items":null} + +Apr 29 19:50:11.877: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"769047"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:50:11.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4807" for this suite. 
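
Both DaemonSet checks above and below (pod revival here, RollingUpdate in the next test) can be driven manually against a spec along these lines; the name and labels are illustrative, and the images match the ones the suite uses:

```console
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-demo
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
EOF
$ kubectl delete pod -l app=daemon-demo         # the controller revives one pod per node
$ kubectl set image daemonset/daemon-set app=k8s.gcr.io/e2e-test-images/agnhost:2.32
$ kubectl rollout status daemonset/daemon-set   # default RollingUpdate strategy replaces pods node by node
```
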
+ +• [SLOW TEST:9.876 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":334,"skipped":5850,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:50:11.909: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:50:11.975: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Apr 29 19:50:12.004: INFO: Number of nodes with available pods: 0 +Apr 29 19:50:12.004: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:13.018: INFO: Number of nodes with available pods: 0 +Apr 29 19:50:13.018: INFO: Node tkg-mgmt-vc-control-plane-4czbf is running more than one daemon pod +Apr 29 19:50:14.016: INFO: Number of nodes with available pods: 2 +Apr 29 19:50:14.016: INFO: Number of running nodes: 2, number of available pods: 2 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Apr 29 19:50:14.049: INFO: Wrong image for pod: daemon-set-5tbf9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Apr 29 19:50:14.049: INFO: Wrong image for pod: daemon-set-cp27c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Apr 29 19:50:15.062: INFO: Wrong image for pod: daemon-set-5tbf9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Apr 29 19:50:16.063: INFO: Wrong image for pod: daemon-set-5tbf9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Apr 29 19:50:17.064: INFO: Wrong image for pod: daemon-set-5tbf9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Apr 29 19:50:18.062: INFO: Wrong image for pod: daemon-set-5tbf9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. +Apr 29 19:50:18.063: INFO: Pod daemon-set-tnjwg is not available +Apr 29 19:50:20.063: INFO: Pod daemon-set-t9fgl is not available +STEP: Check that daemon pods are still running on every node of the cluster. 
+Apr 29 19:50:20.085: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:20.085: INFO: Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc is running more than one daemon pod +Apr 29 19:50:21.100: INFO: Number of nodes with available pods: 1 +Apr 29 19:50:21.100: INFO: Node tkg-mgmt-vc-md-0-59d8b7c778-msxpc is running more than one daemon pod +Apr 29 19:50:22.098: INFO: Number of nodes with available pods: 2 +Apr 29 19:50:22.098: INFO: Number of running nodes: 2, number of available pods: 2 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4739, will wait for the garbage collector to delete the pods +Apr 29 19:50:22.184: INFO: Deleting DaemonSet.extensions daemon-set took: 8.11151ms +Apr 29 19:50:22.285: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.320086ms +Apr 29 19:50:24.790: INFO: Number of nodes with available pods: 0 +Apr 29 19:50:24.790: INFO: Number of running nodes: 0, number of available pods: 0 +Apr 29 19:50:24.793: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"769255"},"items":null} + +Apr 29 19:50:24.803: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"769255"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:50:24.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4739" for this suite. + +• [SLOW TEST:12.928 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":335,"skipped":5887,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:50:24.837: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:50:24.887: INFO: The status of Pod server-envvars-18abbeea-0bcd-4dee-949c-5324838092ce is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:50:26.892: INFO: The status of Pod 
server-envvars-18abbeea-0bcd-4dee-949c-5324838092ce is Running (Ready = true) +Apr 29 19:50:26.915: INFO: Waiting up to 5m0s for pod "client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11" in namespace "pods-6412" to be "Succeeded or Failed" +Apr 29 19:50:26.930: INFO: Pod "client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11": Phase="Pending", Reason="", readiness=false. Elapsed: 15.542087ms +Apr 29 19:50:28.937: INFO: Pod "client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021758089s +STEP: Saw pod success +Apr 29 19:50:28.937: INFO: Pod "client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11" satisfied condition "Succeeded or Failed" +Apr 29 19:50:28.942: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11 container env3cont: +STEP: delete the pod +Apr 29 19:50:28.963: INFO: Waiting for pod client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11 to disappear +Apr 29 19:50:28.967: INFO: Pod client-envvars-a4fa845e-c3f9-485d-b9d0-e9a44483fb11 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:50:28.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6412" for this suite. +•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":336,"skipped":5908,"failed":0} +S +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:50:28.981: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Apr 29 19:50:29.040: INFO: Waiting up to 1m0s for all nodes to be ready +Apr 29 19:51:29.117: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:51:29.129: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:51:29.224: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. 
+Apr 29 19:51:29.231: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:51:29.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-6831" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:51:29.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-5735" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:60.379 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":337,"skipped":5909,"failed":0} +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:51:29.361: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:51:29.406: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Apr 29 19:51:37.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-3361 --namespace=crd-publish-openapi-3361 create -f -' +Apr 29 19:51:40.520: INFO: stderr: "" +Apr 29 19:51:40.520: INFO: stdout: "e2e-test-crd-publish-openapi-1076-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Apr 29 19:51:40.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-3361 
--namespace=crd-publish-openapi-3361 delete e2e-test-crd-publish-openapi-1076-crds test-cr' +Apr 29 19:51:40.609: INFO: stderr: "" +Apr 29 19:51:40.609: INFO: stdout: "e2e-test-crd-publish-openapi-1076-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Apr 29 19:51:40.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-3361 --namespace=crd-publish-openapi-3361 apply -f -' +Apr 29 19:51:41.077: INFO: stderr: "" +Apr 29 19:51:41.077: INFO: stdout: "e2e-test-crd-publish-openapi-1076-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Apr 29 19:51:41.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-3361 --namespace=crd-publish-openapi-3361 delete e2e-test-crd-publish-openapi-1076-crds test-cr' +Apr 29 19:51:41.156: INFO: stderr: "" +Apr 29 19:51:41.156: INFO: stdout: "e2e-test-crd-publish-openapi-1076-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Apr 29 19:51:41.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-3361 explain e2e-test-crd-publish-openapi-1076-crds' +Apr 29 19:51:41.543: INFO: stderr: "" +Apr 29 19:51:41.543: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1076-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:51:49.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-3361" for this suite. 
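
The CRD behind this test marks spec as an embedded object that preserves unknown fields, which is why arbitrary properties pass client-side validation above and why `kubectl explain` reports spec without a concrete schema. A sketch with illustrative names, since the suite generates its group and kind:

```console
$ kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true   # accept arbitrary nested fields
          status:
            type: object
EOF
$ kubectl explain waldos.spec   # shows the preserved, schemaless embedded object
$ kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: Waldo
metadata:
  name: test-cr
spec:
  anything: goes   # unknown property, accepted because unknown fields are preserved
EOF
```
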
+ +• [SLOW TEST:19.930 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":338,"skipped":5910,"failed":0} +SS +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:51:49.292: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:51:49.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-6272" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":339,"skipped":5912,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:51:49.350: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Apr 29 19:51:49.397: INFO: Waiting up to 1m0s for all nodes to be ready +Apr 29 19:52:49.465: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Create pods that use 4/5 of node resources. 
+Apr 29 19:52:49.501: INFO: Created pod: pod0-0-sched-preemption-low-priority +Apr 29 19:52:49.508: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Apr 29 19:52:49.536: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Apr 29 19:52:49.545: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:53:07.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-3993" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:78.336 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":340,"skipped":5927,"failed":0} +SSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:53:07.687: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Apr 29 19:53:07.741: INFO: Waiting up to 5m0s for pod "security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac" in namespace "security-context-8912" to be "Succeeded or Failed" +Apr 29 19:53:07.748: INFO: Pod "security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.924717ms +Apr 29 19:53:09.754: INFO: Pod "security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013345062s +STEP: Saw pod success +Apr 29 19:53:09.755: INFO: Pod "security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac" satisfied condition "Succeeded or Failed" +Apr 29 19:53:09.759: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac container test-container: <nil> +STEP: delete the pod +Apr 29 19:53:09.791: INFO: Waiting for pod security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac to disappear +Apr 29 19:53:09.795: INFO: Pod security-context-3d367067-3ce3-45ca-9dbd-112ea7254eac no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:53:09.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-8912" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":341,"skipped":5930,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:53:09.807: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +Apr 29 19:53:09.844: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Apr 29 19:53:17.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-2636 --namespace=crd-publish-openapi-2636 create -f -' +Apr 29 19:53:19.650: INFO: stderr: "" +Apr 29 19:53:19.650: INFO: stdout: "e2e-test-crd-publish-openapi-4333-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Apr 29 19:53:19.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-2636 --namespace=crd-publish-openapi-2636 delete e2e-test-crd-publish-openapi-4333-crds test-cr' +Apr 29 19:53:19.726: INFO: stderr: "" +Apr 29 19:53:19.726: INFO: stdout: "e2e-test-crd-publish-openapi-4333-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Apr 29 19:53:19.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-2636 --namespace=crd-publish-openapi-2636 apply -f -' +Apr 29 19:53:20.158: INFO: stderr: "" +Apr 29 19:53:20.158: INFO: stdout: "e2e-test-crd-publish-openapi-4333-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Apr 29 19:53:20.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-2636 --namespace=crd-publish-openapi-2636 delete
e2e-test-crd-publish-openapi-4333-crds test-cr' +Apr 29 19:53:20.257: INFO: stderr: "" +Apr 29 19:53:20.257: INFO: stdout: "e2e-test-crd-publish-openapi-4333-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Apr 29 19:53:20.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-935385948 --namespace=crd-publish-openapi-2636 explain e2e-test-crd-publish-openapi-4333-crds' +Apr 29 19:53:20.743: INFO: stderr: "" +Apr 29 19:53:20.743: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4333-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:53:28.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-2636" for this suite. + +• [SLOW TEST:18.564 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":342,"skipped":5949,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:53:28.374: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating a pod to test emptydir 0777 on node default medium +Apr 29 19:53:28.431: INFO: Waiting up to 5m0s for pod "pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6" in namespace "emptydir-441" to be "Succeeded or Failed" +Apr 29 19:53:28.438: INFO: Pod "pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.687224ms +Apr 29 19:53:30.445: INFO: Pod "pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.013557927s +STEP: Saw pod success +Apr 29 19:53:30.445: INFO: Pod "pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6" satisfied condition "Succeeded or Failed" +Apr 29 19:53:30.449: INFO: Trying to get logs from node tkg-mgmt-vc-md-0-59d8b7c778-msxpc pod pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6 container test-container: <nil> +STEP: delete the pod +Apr 29 19:53:30.479: INFO: Waiting for pod pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6 to disappear +Apr 29 19:53:30.486: INFO: Pod pod-f5eca7ae-6b1e-43f5-a2e3-cba529bfb7b6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:53:30.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-441" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":5988,"failed":0} +SSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:53:30.498: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 +STEP: Creating service test in namespace statefulset-1227 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-1227 +STEP: Waiting until pod test-pod will start running in namespace statefulset-1227 +STEP: Creating statefulset with conflicting port in namespace statefulset-1227 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1227 +Apr 29 19:53:34.606: INFO: Observed stateful pod in namespace: statefulset-1227, name: ss-0, uid: ab2b50c5-f26c-4b72-8668-f4b365b951df, status phase: Pending. Waiting for statefulset controller to delete. +Apr 29 19:53:34.628: INFO: Observed stateful pod in namespace: statefulset-1227, name: ss-0, uid: ab2b50c5-f26c-4b72-8668-f4b365b951df, status phase: Failed. Waiting for statefulset controller to delete. +Apr 29 19:53:34.637: INFO: Observed stateful pod in namespace: statefulset-1227, name: ss-0, uid: ab2b50c5-f26c-4b72-8668-f4b365b951df, status phase: Failed. Waiting for statefulset controller to delete.
+Apr 29 19:53:34.641: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1227 +STEP: Removing pod with conflicting port in namespace statefulset-1227 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1227 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 +Apr 29 19:53:38.686: INFO: Deleting all statefulset in ns statefulset-1227 +Apr 29 19:53:38.691: INFO: Scaling statefulset ss to 0 +Apr 29 19:53:48.718: INFO: Waiting for statefulset status.replicas updated to 0 +Apr 29 19:53:48.721: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:53:48.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1227" for this suite. + +• [SLOW TEST:18.268 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97 + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":344,"skipped":5991,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:53:48.766: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 +STEP: create the container to handle the HTTPGet hook request. 
+Apr 29 19:53:48.826: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:53:50.831: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: create the pod with lifecycle hook +Apr 29 19:53:50.849: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:53:52.855: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Apr 29 19:53:52.869: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Apr 29 19:53:52.873: INFO: Pod pod-with-prestop-http-hook still exists +Apr 29 19:53:54.875: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Apr 29 19:53:55.094: INFO: Pod pod-with-prestop-http-hook still exists +Apr 29 19:53:56.875: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Apr 29 19:53:56.881: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:53:56.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-3776" for this suite. + +• [SLOW TEST:8.141 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":345,"skipped":6019,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Apr 29 19:53:56.908: INFO: >>> kubeConfig: /tmp/kubeconfig-935385948 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +STEP: Creating the pod +Apr 29 19:53:56.954: INFO: The status of Pod labelsupdate6d534cb3-9958-4934-a0c1-a9972ab78b6b is 
Pending, waiting for it to be Running (with Ready = true) +Apr 29 19:53:58.959: INFO: The status of Pod labelsupdate6d534cb3-9958-4934-a0c1-a9972ab78b6b is Running (Ready = true) +Apr 29 19:53:59.485: INFO: Successfully updated pod "labelsupdate6d534cb3-9958-4934-a0c1-a9972ab78b6b" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Apr 29 19:54:03.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8632" for this suite. + +• [SLOW TEST:6.624 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":346,"skipped":6073,"failed":0} +SSSSSSSSSSSSSSApr 29 19:54:03.534: INFO: Running AfterSuite actions on all nodes +Apr 29 19:54:03.535: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2 +Apr 29 19:54:03.536: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Apr 29 19:54:03.536: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Apr 29 19:54:03.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Apr 29 19:54:03.537: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Apr 29 19:54:03.541: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Apr 29 19:54:03.542: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Apr 29 19:54:03.542: INFO: Running AfterSuite actions on node 1 +Apr 29 19:54:03.543: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/sonobuoy/results/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6087,"failed":0} + +Ran 346 of 6433 Specs in 6002.044 seconds +SUCCESS! 
-- 346 Passed | 0 Failed | 0 Pending | 6087 Skipped +PASS + +Ginkgo ran 1 suite in 1h40m5.316595576s +Test Suite Passed diff --git a/v1.22/vmware-tanzu-kubernetes-grid/junit_01.xml b/v1.22/vmware-tanzu-kubernetes-grid/junit_01.xml new file mode 100644 index 0000000000..49d6ec1629 --- /dev/null +++ b/v1.22/vmware-tanzu-kubernetes-grid/junit_01.xml @@ -0,0 +1,18610 @@ +[18,610 added lines of junit_01.xml omitted: the JUnit XML markup was stripped during extraction, leaving only bare diff markers with no recoverable content] \ No newline at end of file