Rancher 2.0 cluster creation failure

Provisioning failed for cluster [url=https://192.168.102.46/c/c-7s88k]clusterone[/url] (type: Custom, 1 node, CPU/memory: N/A):

[workerPlane] Failed to bring up Worker Plane: Failed to verify healthcheck: Service [kubelet] is not healthy on host [192.168.102.45]. Response code: [403], response body: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)

Relevant kubelet log from the node (note the kube-proxy, kube-flannel, and busybox pods from an earlier deployment, all in CrashLoopBackOff):

May  8 19:02:52 node-102-45 kubelet: I0508 19:02:52.329686    2396 kuberuntime_manager.go:513] Container {Name:kube-proxy Image:k8s.gcr.io/kube-proxy-amd64:v1.10.0 Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:kube-proxy ReadOnly:false MountPath:/var/lib/kube-proxy SubPath: MountPropagation:} {Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:} {Name:lib-modules ReadOnly:true MountPath:/lib/modules SubPath: MountPropagation:} {Name:kube-proxy-token-kdzz2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May  8 19:02:52 node-102-45 kubelet: I0508 19:02:52.329841    2396 kuberuntime_manager.go:757] checking backoff for container "kube-proxy" in pod "kube-proxy-lt8vz_kube-system(ae1687ac-3ebd-11e8-81cf-005056a13744)"
May  8 19:02:52 node-102-45 kubelet: I0508 19:02:52.329965    2396 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lt8vz_kube-system(ae1687ac-3ebd-11e8-81cf-005056a13744)
May  8 19:02:52 node-102-45 kubelet: E0508 19:02:52.329996    2396 pod_workers.go:186] Error syncing pod ae1687ac-3ebd-11e8-81cf-005056a13744 ("kube-proxy-lt8vz_kube-system(ae1687ac-3ebd-11e8-81cf-005056a13744)"), skipping: failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lt8vz_kube-system(ae1687ac-3ebd-11e8-81cf-005056a13744)"
May  8 19:02:52 node-102-45 journal: 2018-05-08 11:02:52.420072 I | mvcc: store.index: compact 372321
May  8 19:02:52 node-102-45 journal: 2018-05-08 11:02:52.421592 I | mvcc: finished scheduled compaction at 372321 (took 894.406µs)
May  8 19:02:54 node-102-45 kubelet: I0508 19:02:54.329704    2396 kuberuntime_manager.go:513] Container {Name:kube-flannel Image:quay.io/coreos/flannel:v0.9.1-amd64 Command:[/opt/bin/flanneld --ip-masq --kube-subnet-mgr] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:POD_NAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {Name:POD_NAMESPACE Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:run ReadOnly:false MountPath:/run SubPath: MountPropagation:} {Name:flannel-cfg ReadOnly:false MountPath:/etc/kube-flannel/ SubPath: MountPropagation:} {Name:flannel-token-4k59h ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May  8 19:02:54 node-102-45 kubelet: I0508 19:02:54.329864    2396 kuberuntime_manager.go:757] checking backoff for container "kube-flannel" in pod "kube-flannel-ds-2h7m6_kube-system(cd7668f8-3ebe-11e8-81cf-005056a13744)"
May  8 19:02:54 node-102-45 kubelet: I0508 19:02:54.330022    2396 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-2h7m6_kube-system(cd7668f8-3ebe-11e8-81cf-005056a13744)
May  8 19:02:54 node-102-45 kubelet: E0508 19:02:54.330058    2396 pod_workers.go:186] Error syncing pod cd7668f8-3ebe-11e8-81cf-005056a13744 ("kube-flannel-ds-2h7m6_kube-system(cd7668f8-3ebe-11e8-81cf-005056a13744)"), skipping: failed to "StartContainer" for "kube-flannel" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-flannel pod=kube-flannel-ds-2h7m6_kube-system(cd7668f8-3ebe-11e8-81cf-005056a13744)"
May  8 19:02:56 node-102-45 kubelet: I0508 19:02:56.329469    2396 kuberuntime_manager.go:513] Container {Name:busybox Image:busybox Command:[] Args:[/bin/sh/] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ktfth ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:true StdinOnce:false TTY:true} is dead, but RestartPolicy says that we should restart it.
May  8 19:02:56 node-102-45 kubelet: I0508 19:02:56.329643    2396 kuberuntime_manager.go:757] checking backoff for container "busybox" in pod "busybox-85769dcc67-fmg8r_default(17f28cd7-4db3-11e8-ab9c-005056a13744)"
May  8 19:02:56 node-102-45 kubelet: I0508 19:02:56.329753    2396 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=busybox pod=busybox-85769dcc67-fmg8r_default(17f28cd7-4db3-11e8-ab9c-005056a13744)
May  8 19:02:56 node-102-45 kubelet: E0508 19:02:56.329794    2396 pod_workers.go:186] Error syncing pod 17f28cd7-4db3-11e8-ab9c-005056a13744 ("busybox-85769dcc67-fmg8r_default(17f28cd7-4db3-11e8-ab9c-005056a13744)"), skipping: failed to "StartContainer" for "busybox" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=busybox pod=busybox-85769dcc67-fmg8r_default(17f28cd7-4db3-11e8-ab9c-005056a13744)"
May  8 19:03:06 node-102-45 kubelet: I0508 19:03:06.329467    2396 kuberuntime_manager.go:513] Container {Name:kube-proxy Image:k8s.gcr.io/kube-proxy-amd64:v1.10.0 Command:[/usr/local/bin/kube-proxy --con
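The 403 in the provisioning error means the kubelet rejected the unauthenticated healthcheck as system:anonymous. You can reproduce the same probe from any machine that can reach the node; this is a minimal sketch, where the node IP comes from the log above and 10250 is the kubelet's default secure port:

```shell
# Hedged sketch: probe the kubelet health endpoint the way the provisioner does.
# -k skips TLS verification; -w prints only the HTTP status code
# (curl reports 000 if the node is unreachable).
if command -v curl >/dev/null 2>&1; then
  curl -sk --connect-timeout 3 -o /dev/null \
    -w 'kubelet healthz HTTP status: %{http_code}\n' \
    https://192.168.102.45:10250/healthz || true
fi
probe_done=1
```

A 403 from this probe matches the error above: the request is treated as system:anonymous, and the kubelet's authorization (delegated to the API server as a nodes/proxy check) denies it.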
Solved! It was a problem with that node itself!
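For anyone hitting the same symptom: the log shows k8s.gcr.io kube-proxy and flannel pods, i.e. leftovers of an earlier, non-Rancher Kubernetes install on that node, and stale state like that (old certificates, kubelet data) is a common cause of this 403. A typical remedy is to wipe the old state and re-register the node. The sketch below lists the usual default paths and only reports what it would delete; it is not an official Rancher procedure, so review it before turning the echo into a real rm:

```shell
# Hedged sketch: dry-run cleanup of leftover Kubernetes state on a node
# before re-adding it to a Rancher 2.0 custom cluster.
set -u

# 1. Remove containers left over from the failed run, if Docker is present.
if command -v docker >/dev/null 2>&1; then
  leftover=$(docker ps -aq)
  [ -n "$leftover" ] && docker rm -f $leftover >/dev/null 2>&1 || true
fi

# 2. State directories a previous install (e.g. kubeadm) may have left behind.
#    These paths are the common defaults -- adjust for your environment.
for dir in /etc/kubernetes /var/lib/kubelet /var/lib/etcd /etc/cni /var/lib/cni /opt/cni; do
  if [ -d "$dir" ]; then
    echo "would remove: $dir"   # once verified, change to: rm -rf "$dir"
  fi
done

cleanup_done=1
echo "cleanup dry-run finished"
```

After cleaning up (and rebooting, so stale mounts and iptables rules are gone), re-run the docker run registration command from the Rancher UI on the node.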
