I had RabbitMQ 3.2.1 (Erlang OTP 16B02, x64) running on Windows 2008 R2; one day it started returning a nodedown error. I decided to reinstall RabbitMQ: I removed RabbitMQ and the Erlang environment, cleaned the db folder in RABBITMQ_BASE, and removed all Erlang cookies and the RABBITMQ_NODENAME / PORT variables. I installed RabbitMQ 3.5.4 with Erlang OTP 18 x64 as admin... but I'm still not able to manage the service via the command prompt; it gives me the following output:
I've already seen some posts on this error (Post1, Post2) but, as far as I can see, all they suggest is reinstalling RabbitMQ and being careful with Erlang cookies, and I cleaned the system completely after uninstalling the previous version. Still, any suggestions are appreciated.
Thanks.
UPD
Funny thing: I've noticed that the db folder in RABBITMQ_BASE (which is %USERPROFILE%\AppData\Roaming\RabbitMQ) is empty... I thought the service was supposed to create the node structure there on first start...
It's telling you that it is trying to connect to a node named 'rabbit', and that there is a node running under the name 'RabbitMQ'.
Presumably 'RabbitMQ' is indeed your RabbitMQ node? Perhaps your new installation changed the name of the node, or maybe you were using a non-default node name before that has been partially reset? Or maybe something else... Either way, I know you said you cleaned everything, but there is a definite mismatch between the node name used by your server and the one used by the rabbitmqctl client.
See RabbitMQ configuration for details on how to check and change your configuration (for UNIX and Windows), or try telling rabbitmqctl to use a different node name (this is -n on UNIX; I'm not sure about Windows).
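For example, something like this, where the node name RabbitMQ@YOURHOSTNAME is a guess based on the mismatch described above (the same -n flag appears to be accepted by rabbitmqctl.bat on Windows):

rabbitmqctl -n RabbitMQ@YOURHOSTNAME status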
We (the RabbitMQ team) have already seen this behavior, but couldn't reproduce it so far. What we discovered is that, for unknown reasons, the Windows service is installed without its parameters; in particular, the node name (rabbit@<hostname>) is missing, so Erlang (or Windows, I don't know) picks the service name as the node name (RabbitMQ@<hostname>).
rabbitmqctl fails to contact this node because it expects rabbit@<hostname> by default. But either way, the node is not working properly.
The workaround we know is to remove and reinstall the Windows service.
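From the RabbitMQ sbin directory in an administrator command prompt, that is roughly:

rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start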
I tried to play with the LowCardinality setting and got a message saying that this is an experimental feature and that I have to SET allow_experimental_low_cardinality_type = 1 in order to use it.
I executed this command inside clickhouse-client and then restarted the server, but I got
clickhouse-server.service: Unit entered failed state
Now I am trying to find out how to disable this setting and make my clickhouse-server start again.
Can you help with this, please?
PS: The version I use is 18.12.17, installed on Ubuntu 16.04.
ClickHouse has different layers of settings. If you use SET <setting> = <value>, you set it only for the current session; no server restart is needed for that. Please take a look here.
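A quick sketch of the session-level behavior (the second statement just verifies the value seen by the current session):

SET allow_experimental_low_cardinality_type = 1; -- applies to this session only, no restart involved
SELECT name, value FROM system.settings WHERE name = 'allow_experimental_low_cardinality_type';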
I suppose you ran into a different problem when starting your server; there are a bunch of possible reasons. So, first try to recall what was changed in the configs since the last restart (because restarting the server is what just applied those changes).
Digging into the logs is also a good idea. Don't hesitate to check other similar issues on github.com, for example this one.
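For instance, on a stock Ubuntu package install, the following should show why the service failed (these are the package's default paths; adjust them if you changed the log config):

sudo journalctl -u clickhouse-server -e
sudo tail -n 100 /var/log/clickhouse-server/clickhouse-server.err.log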
I know there are about a thousand answers to various permutations of this question but none of the fifteen or so that I've tried have worked.
I'm running on Mac OS Sierra and using Minikube 0.17.1 as well as kubectl 1.5.3.
We run our own private Docker registry that is insecure as it is not open to the internet. (This is not my choice or in my control so it's not open for discussion). This is my first foray into Kubernetes and actually container orchestration altogether. I also have a very intermediate level of knowledge about Docker in general so I'm drowning in terminology/platform soup here.
When I execute
kubectl run perf-ui --image=X.X.X.X/performance/perf-ui:master
I see
image pull failed for X.X.X.X/performance/perf-ui:master, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://X.X.X.X/v1/_ping: dial tcp X.X.X.X:443: getsockopt: connection refused)
We have an Ubuntu box that accesses the same registry (not using Kubernetes, just Docker) that works just fine. This is likely due to
DOCKER_OPTS="--insecure-registry X.X.X.X"
being in /etc/default/docker.
I made a similar change using the UI of Docker for Mac. I don't know where this change persisted in a config file, but after it, a docker pull worked on my laptop!!! Again, this is just using Docker, not Kubernetes. The interesting part is that I got the same "connection refused" error (as it tries to access via HTTPS) on my Mac as I get in the Minikube VM, and after the change the pull worked. I feel like I'm on to something there.
After sshing into minikube (the VM created by minikube start) using
minikube ssh
I added the following content to /var/lib/boot2docker/profile
export EXTRA_ARGS="$EXTRA_ARGS --insecure-registry 10.129.100.3"
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.129.100.3"
As you can infer, nothing has worked. I know I've tried other things but they too have failed.
I know this isn't the most comprehensive explanation but I've been digging into this for the past 4 hours.
The bottom line is that docker pulls work from our Ubuntu box with the config file set up correctly, and from my Mac with the setting configured properly.
How can I enable the same setting in my "Linux 2.6" VM that was created by Minikube?
If someone knows the answer I would be forever grateful.
Thank you in advance!
Thank you to Janos for your alternative solution. I'm confident that is the right choice for some use cases.
It turns out that what I needed was a good night's sleep and the following command to start Minikube in the first place:
minikube start --insecure-registry="X.X.X.X"
@intelfx says that adding a port won't be necessary. I'm inclined to believe them, but if your registry is on a non-standard port, just keep it in mind in case things still aren't working.
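If you do need a non-standard port, I'd expect the flag to take it inline, along these lines (the 5000 here is purely illustrative):

minikube start --insecure-registry="X.X.X.X:5000"  # 5000 is an assumed port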
In the end it was, in fact, a matter of telling Docker to use an insecure registry; it just wasn't clear how to tell it this when I wasn't controlling Docker directly.
I know that seems simple but after you've tried a hundred things you're almost hallucinating so you're not in a great state to make rational decisions. I'm sorry for the dumb post but I'm willing to bet this will help at least one person one day, which makes it worth it.
Thanks SO!
The --insecure-registry flag doesn't work on an existing cluster on macOS. You need to do minikube delete; it's not enough just to stop the cluster with minikube stop.
I spent plenty of time figuring this out, and then I found this comment at https://github.com/kubernetes/minikube/issues/604:
the --insecure-registry flag is ignored if the machine already existed (even if it is stopped). You must first minikube delete if you want new flags to be respected.
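So the full sequence is (registry address as in the question):

minikube delete
minikube start --insecure-registry="X.X.X.X"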
You can use kube-registry-proxy (it needs some configuration):
https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml
Then you can refer to localhost:5050 as your registry. The trick is that localhost is allowed as an insecure registry by default.
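Assuming you have configured the proxy to listen on that port, a pull would then reference the image like this (image path borrowed from the question above):

kubectl run perf-ui --image=localhost:5050/performance/perf-ui:master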
I am running some Erlang code on macOS, and I have this weird issue. My application is a multi-node app where I have a single instance of a server that is shared between nodes (global).
The code works perfectly, except for one annoying thing: the different Erlang nodes (I am running each node in a different terminal window) can only communicate with each other after a ping!
So if I start the server on terminalA, and on terminalB I run
erl>global:registered_names().
terminalB will return an empty list unless, before starting the server on terminalA, I have run a ping (from either one of the terminals).
For example, if I do this in either terminal before starting the server (where terminalB@hostname stands for the other node's full name):
erl> net_adm:ping(terminalB@hostname).
then I start the server and from the second terminal I list the processes:
erl>global:registered_names().
This time I WILL see the registered process from the second terminal.
Is it possible that the mere net_adm:ping call does some kind of work (like DNS resolving or something like that) that allows the communication?
The nodes in a distributed Erlang system are loosely connected. The first time the name of another node is used, for example if spawn(Node,M,F,A) or net_adm:ping(Node) is called, a connection attempt to that node will be made.
I found this at the following link: http://www.erlang.org/doc/reference_manual/distributed.html#id85336
I think you should read that page.
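To make the lazy connection concrete, here is a hypothetical session with two nodes started on the same machine as erl -sname a and erl -sname b (the node names are assumptions):

%% on node a, before any contact with node b
(a@myhost)1> nodes().
[]
%% the first use of the other node's name establishes the connection
(a@myhost)2> net_adm:ping(b@myhost).
pong
(a@myhost)3> nodes().
[b@myhost]
%% from this point on, global:registered_names() on either node
%% will include names registered on the other one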
I'm working on a queue manager migration from 6.0 to 7.0, but I ran into a problem when restoring a V6.0 queue manager from 7.0 on Windows. After re-installing MQ 6.0, I copied back the previously backed-up QMGR data and logs, and then tried to start that QMGR, for instance TEST01. However, the command strmqm TEST01 returns saying that no such QMGR exists.
The restore procedure I followed is from the Info Center below:
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp
and I backed up and restored the QMGR data and logs as follows:
Backup
copy C:\Program Files (x86)\IBM\WebSphere MQ\Qmgrs\TEST01 under another path
copy C:\Program Files (x86)\IBM\WebSphere MQ\log\TEST01 under another path
Restore
copy above backup folder back to target path
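In command form, the copies were essentially these (the backup destination D:\MQBackup is just illustrative):

rem D:\MQBackup is an illustrative destination path
xcopy "C:\Program Files (x86)\IBM\WebSphere MQ\Qmgrs\TEST01" "D:\MQBackup\Qmgrs\TEST01" /E /I /H /K
xcopy "C:\Program Files (x86)\IBM\WebSphere MQ\log\TEST01" "D:\MQBackup\log\TEST01" /E /I /H /K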
So, according to the above operations, did I miss anything or do something wrong?
UPDATE:
This issue has been fixed. I had forgotten to back up the configuration information from the registry, so I restored it afterwards. That's why MQ could not recognize my QMGR at the very beginning.
Additionally, I've got another question here:
How do I transfer the configuration information from the registry to the mqs.ini file?
You are far better off not migrating QMgrs but rather creating new ones at the new version. Although IBM has always provided an upgrade path, the implementation of certain functionality differs from version to version. For example, on Windows the registry settings from V6 are no longer used in V7.1 and higher. The requirement to upgrade usually comes from the belief that replacing the QMgr somehow loses something.
In fact, this is rarely the case. There is also nothing so special about a QMgr that well-designed client applications would need to know its name. The host, port and channel uniquely identify a QMgr for a client application. If the app specifies the QMgr's name and it does not match, the connection fails; but the app can specify a blank QMgr name and the connection will succeed. The QMgr's name is automatically filled into the Reply-To QMgr field, so requests are properly handled. The only things that need to know the name are a QRemote (which can be repointed) or a local app using a bindings-mode connection.
That said, to answer your question: just performing the upgrade to V7.1 or V7.5 will move the QMgr's settings from the registry to the ini files.
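After the upgrade, each queue manager ends up as a stanza in mqs.ini, roughly like this (the Prefix path is illustrative for a default Windows install):

QueueManager:
   Name=TEST01
   Prefix=C:\Program Files (x86)\IBM\WebSphere MQ
   Directory=TEST01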
I've compiled and trawled through the QuickFIX (http://www.quickfixengine.org) source and the examples. I figured a good starting point would be to compile (C++) and run the 'executor' example, then use the 'tradeclient' example to connect to 'executor' and send it order requests.
I created two separate session files, one for the 'executor' as the acceptor and one for the 'tradeclient' as the initiator. They're both running on the same Win7 PC.
'executor' runs, but 'tradeclient' can't connect to it, and I can't figure out why. I downloaded Mini-fix and was able to send messages to executor, so I know that executor is working. I figure the problem is with the tradeclient session settings. I've included both settings files below, hoping someone can point out what's preventing them from communicating. They're both running on the same computer using port 56156.
---- acceptor session.txt ----
[DEFAULT]
ConnectionType=acceptor
ReconnectInterval=5
SenderCompID=EXEC
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=SENDER
HeartBtInt=5
#SocketConnectPort=
SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileStorePath=store
---- initiator session.txt ---
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=5
SenderCompID=SENDER
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=EXEC
HeartBtInt=5
SocketConnectPort=56156
#SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileLogPath=log
FileStorePath=store
--------end------
Update: Thanks for the responses... It turns out that my log file directories didn't exist. Once I created them, the two sides started communicating. There must have been some logging error that didn't throw an exception but disabled proper behavior.
Is there an error condition that I should be checking? I was relying on exceptions, but that's obviously not enough.
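For reference, my startup code follows the stock tradeclient pattern, roughly like the sketch below (the settings file name is assumed, app stands in for the example's FIX::Application subclass, and the catch blocks are exactly the part I'm unsure about):

#include <iostream>
#include "quickfix/Application.h"
#include "quickfix/FileLog.h"
#include "quickfix/FileStore.h"
#include "quickfix/SessionSettings.h"
#include "quickfix/SocketInitiator.h"

// app is the tradeclient example's FIX::Application subclass
int startClient(FIX::Application& app) {
  try {
    FIX::SessionSettings settings("initiator_session.txt"); // assumed file name
    FIX::FileStoreFactory storeFactory(settings);
    FIX::FileLogFactory logFactory(settings);
    FIX::SocketInitiator initiator(app, storeFactory, settings, logFactory);
    initiator.start(); // config and I/O problems should surface here
    // ... run the client ...
    initiator.stop();
    return 0;
  } catch (const FIX::ConfigError& e) {
    std::cerr << "config error: " << e.what() << std::endl;
  } catch (const FIX::RuntimeError& e) {
    std::cerr << "runtime error: " << e.what() << std::endl;
  }
  return 1;
}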
It doesn't seem to be the config; check that your message sequence numbers are in sync, especially since you've been connecting to a different server using the same settings.
Try setting the TargetCompID and SenderCompID on the acceptor to *
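If I understand that suggestion, the acceptor's session block would look something like the following (whether a bare * wildcard is honored may depend on your QuickFIX version):

[SESSION]
BeginString=FIXT.1.1
# wildcard CompIDs: accept any initiator, per the suggestion above
SenderCompID=*
TargetCompID=*
HeartBtInt=5
SocketAcceptPort=56156
FileStorePath=store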