WSREP: wsrep::connect() failed: 7 #4
Comments
I even tried hardcoding the IPs of the two hosts in gcomm, but it still does not work.
Finally I hardcoded the node's (host2) gcomm to point to the seed on host1 (192.168.33.101). Running nmap against the seed host, I get the following:
I created a rule in HAProxy to listen on *:3306, so it makes sense that the MySQL port is open. Concerning port 4567, however, Marathon/Mesos assigns dynamic ports in the 31000-32000 range, so when the node tries to connect to host1 at 192.168.33.101:4567, it finds nothing there (if I launch the containers from the command line and publish the ports with -p 3306:3306 -p 4567:4567, everything works as expected).

In your demo it does not look like you have this kind of problem; did you manually add some other configuration? Is it possible to dynamically configure which port Galera listens on for new nodes (i.e. pick up the ports assigned by Marathon/Mesos and listen on those instead of 4567)? Thanks for the help.
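For reference, I believe Galera's replication listen port can be overridden through the provider options, so in principle the Marathon-assigned port could be passed in at startup. A rough sketch only, assuming the container entrypoint forwards extra arguments to mysqld and that Marathon exposes the assigned port as the PORT0 environment variable (both are assumptions on my side, not something verified in this repo):

```sh
# Sketch only: make Galera listen on the Marathon-assigned port instead of 4567.
# $PORT0 is assumed to be set by Marathon inside the task; 4567 is the fallback.
mysqld \
  --wsrep_provider_options="gmcast.listen_addr=tcp://0.0.0.0:${PORT0:-4567}" \
  --wsrep_node_address="192.168.33.102:${PORT0:-4567}"
# Note: the SST (4444) and IST (4568) ports are separate and would need the
# same treatment (wsrep_sst_receive_address, ist.recv_addr) if they are also
# remapped dynamically.
```

The catch is that every other node's gcomm list would then have to carry the right host:port pair per member, which is why publishing the fixed ports (as with the -p flags above) is usually simpler.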
Well, it appears that your Galera nodes are connecting directly (between Docker containers) thanks to Weave. Due to bad performance I was not using Weave, so when the node was trying to connect to the seed at 192.168.33.101:4567, the port was not available on the host. I finally managed to map all ports on the host with Marathon by changing the mesos-slave resources and setting the hostPort in the marathon.json file. One weird thing is that when running nmap on the seed host I get:
I don't quite get how the node can sync with the seed via xtrabackup SST, as it is supposed to use port 4444, which is closed on the seed host... Anyway, the seed is running and its healthcheck passes, but when I scale the number of nodes to 1 (which deploys on host2), the node's healthcheck fails. However, when I run "show status like 'wsrep%';" on the seed, I get:
Here is the log I get on the node:
As the healthcheck fails, the node keeps being redeployed, even though it seems to be running fine. Any idea why the node's healthcheck could be failing while the seed's is passing? Thank you
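For reference, the hostPort part of the marathon.json mentioned above looks roughly like this (a sketch only; the app id and image name are placeholders, and the mesos-slave has to be configured to offer these ports as resources, as described above):

```json
{
  "id": "galera-node",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "your-galera-image",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3306, "hostPort": 3306, "protocol": "tcp" },
        { "containerPort": 4567, "hostPort": 4567, "protocol": "tcp" },
        { "containerPort": 4568, "hostPort": 4568, "protocol": "tcp" },
        { "containerPort": 4444, "hostPort": 4444, "protocol": "tcp" }
      ]
    }
  }
}
```

Port 4568 (IST) is included for completeness; 4444 is the one the xtrabackup SST mentioned above would need.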
Hello,
After I launch the cluster through Marathon, the seed is running on host1 (IP 192.168.33.101). I then scale the number of instances of node to 1 (it will try to launch the node on host2, IP 192.168.33.102) and I get the error "WSREP: wsrep::connect() failed: 7". Here is the full log:
In the start file, gcomm is defined by:
The result is always the IP of the host where the Galera node is being launched, not the list of all IPs of the Galera cluster. Do you think that is the culprit?
In the case of galera.service.consul, it is GCOMM+="$SEP$(host -t A "$ADDR" | awk '{ print $4 }' | paste -sd ",")" that is used; do you think I have to change this, or does it work out of the box for you?
I was trying to get all the Galera cluster hosts' IPs with:
but I am not sure it is the way to go...
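For clarity, here is how I understand the snippet above is supposed to behave; a sketch only, assuming galera.service.consul resolves to all registered cluster members from inside the container (variable names mirror the ones quoted above):

```sh
# Sketch: build the gcomm:// member list from Consul DNS.
ADDR="galera.service.consul"
# "host -t A" prints lines like "galera.service.consul has address 192.168.33.101";
# awk grabs the 4th field (the IP), paste joins the lines with commas.
GCOMM="gcomm://$(host -t A "$ADDR" | awk '{ print $4 }' | paste -sd ",")"
echo "$GCOMM"   # e.g. gcomm://192.168.33.101,192.168.33.102
```

If that only ever returns the local host's IP, then presumably the DNS query is not seeing the other members registered in Consul, which would explain the single-IP result described above.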
Hopefully you have some tips :)
Thank you