docker 18.09
kubeadm 1.13

kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=192.168.55.31
git clone https://github.com/contiv/netplugin
cd netplugin/install/k8s/contiv
./contiv-compose use-release --k8s-api https://192.168.55.31:6443 -v $(cat ../../../version/CURRENT_VERSION) ./contiv-base.yaml > ./contiv.yaml
kubectl apply -f contiv.yaml

kubectl get po --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   contiv-netmaster-vjsvd                0/1     Pending   0          9m32s
kube-system   coredns-86c58d9df4-hzt6d              0/1     Pending   0          55m
kube-system   coredns-86c58d9df4-zwn9d              0/1     Pending   0          55m
kube-system   etcd-kubecontiv1                      1/1     Running   0          55m
kube-system   kube-apiserver-kubecontiv1            1/1     Running   0          55m
kube-system   kube-controller-manager-kubecontiv1   1/1     Running   0          55m
kube-system   kube-proxy-f79dv                      1/1     Running   0          55m
kube-system   kube-scheduler-kubecontiv1            1/1     Running   0          55m

kubectl describe pod contiv-netmaster-vjsvd -n kube-system
Name:               contiv-netmaster-vjsvd
Namespace:          kube-system
Priority:           0
PriorityClassName:
Node:
Labels:             k8s-app=contiv-netmaster
Annotations:        scheduler.alpha.kubernetes.io/critical-pod:
Status:             Pending
IP:
Controlled By:      ReplicaSet/contiv-netmaster
Init Containers:
  contiv-netplugin-init:
    Image:      contiv/netplugin-init:latest
    Port:
    Host Port:
    Environment:
      CONTIV_ROLE:        netmaster
      CONTIV_MODE:        <set to the key 'contiv_mode' of config map 'contiv-config'>        Optional: false
      CONTIV_K8S_CONFIG:  <set to the key 'contiv_k8s_config' of config map 'contiv-config'>  Optional: false
      CONTIV_CNI_CONFIG:  <set to the key 'contiv_cni_config' of config map 'contiv-config'>  Optional: false
    Mounts:
      /var/contiv from var-contiv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-ts9pl (ro)
  contiv-netctl:
    Image:      contiv/netplugin:latest
    Port:
    Host Port:
    Command:
      cp
      /contiv/bin/netctl
      /usr/local/sbin/netctl
    Environment:
    Mounts:
      /usr/local/sbin/ from usr-local-sbin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-ts9pl (ro)
Containers:
  contiv-netmaster:
    Image:      contiv/netplugin:latest
    Port:
    Host Port:
    Environment:
      CONTIV_ROLE:                      netmaster
      CONTIV_NETMASTER_MODE:            <set to the key 'contiv_mode' of config map 'contiv-config'>     Optional: false
      CONTIV_NETMASTER_ETCD_ENDPOINTS:  <set to the key 'contiv_etcd' of config map 'contiv-config'>     Optional: false
      CONTIV_NETMASTER_FORWARD_MODE:    <set to the key 'contiv_fwdmode' of config map 'contiv-config'>  Optional: false
      CONTIV_NETMASTER_NET_MODE:        <set to the key 'contiv_netmode' of config map 'contiv-config'>  Optional: false
    Mounts:
      /var/contiv from var-contiv (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from contiv-netmaster-token-ts9pl (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  var-contiv:
    Type:          HostPath (bare host directory volume)
    Path:          /var/contiv
    HostPathType:
  usr-local-sbin:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/sbin/
    HostPathType:
  contiv-netmaster-token-ts9pl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  contiv-netmaster-token-ts9pl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  Warning  FailedScheduling  2m6s (x64 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

How can I solve this problem?
Thanks!
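For anyone debugging this: the scheduler message only says that some taint was not tolerated, so the first step is to see which taints the node actually carries. A minimal check, assuming the node is named kubecontiv1 (inferred from the static pod names above; substitute your own node name):

kubectl get nodes
kubectl describe node kubecontiv1 | grep -i -A 3 taints
kubectl get node kubecontiv1 -o jsonpath='{.spec.taints}'

If the node is NotReady (no CNI is running yet, which would also explain the Pending coredns pods), it will typically carry node.kubernetes.io/not-ready:NoSchedule, which this pod does not tolerate; per the Tolerations section above it only tolerates the NoExecute variant of that taint.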
This document may help you: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
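To make that concrete: the pod already tolerates node-role.kubernetes.io/master:NoSchedule, so the blocking taint is most likely node.kubernetes.io/not-ready:NoSchedule, which the node keeps until a CNI plugin is up. One possible workaround, sketched under that assumption, is to add a matching entry to the existing tolerations list in the contiv-netmaster pod template of the generated contiv.yaml:

tolerations:
- key: node.kubernetes.io/not-ready   # assumed blocking taint; confirm with kubectl describe node
  operator: Exists
  effect: NoSchedule

Then re-apply the manifest and delete the Pending pod so the ReplicaSet recreates it with the new toleration:

kubectl apply -f contiv.yaml
kubectl -n kube-system delete pod contiv-netmaster-vjsvd

Alternatively, if the node turns out to carry some other taint that is safe to drop on a single-node test cluster, it can be removed with kubectl taint nodes <node-name> <key>:<effect>- (note the trailing dash).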
I'm hitting this too. @amwork2010, did you figure out a solution?