Docker image for Spring/RabbitMQ tutorial results in connection refused - spring-boot

I'm working through the Spring tutorial here:
Messaging with RabbitMQ
I found this question, but it did not address my query regarding the docker-compose.yml file found in the tutorial:
Spring RabbitMQ tutorial results in Connection Refused error
I've completed all necessary steps up until actually running the application, at which point I'm getting ConnectException errors suggesting that the server is not running, or not running correctly.
The docker-compose.yml file specified in the tutorial is as follows:
rabbitmq:
  image: rabbitmq:management
  ports:
    - "5672:5672"
    - "15672:15672"
Basically I am unsure what this docker-compose file actually does, because it doesn't seem to set up the RabbitMQ server as the tutorial suggests (or at least not in the way the tutorial expects). I'm also quite new to Docker, so perhaps I am mistaken in thinking this file would run a new instance of the RabbitMQ server.
When I run docker-compose up I get the following console output:
rabbitmq_1 |
rabbitmq_1 | =INFO REPORT==== 28-Jun-2017::13:27:26 ===
rabbitmq_1 | Starting RabbitMQ 3.6.10 on Erlang 20.0-rc2
rabbitmq_1 | Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbitmq_1 | Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 |
rabbitmq_1 | RabbitMQ 3.6.10. Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbitmq_1 | ## ## Licensed under the MPL. See http://www.rabbitmq.com/
rabbitmq_1 | ## ##
rabbitmq_1 | ########## Logs: tty
rabbitmq_1 | ###### ## tty
rabbitmq_1 | ##########
rabbitmq_1 | Starting broker...
rabbitmq_1 |
rabbitmq_1 | =INFO REPORT==== 28-Jun-2017::13:27:26 ===
rabbitmq_1 | node : rabbit@bd20dc3d3d2a
rabbitmq_1 | home dir : /var/lib/rabbitmq
rabbitmq_1 | config file(s) : /etc/rabbitmq/rabbitmq.config
rabbitmq_1 | cookie hash : DTVsmjdKvD5KtH0o/OLVJA==
rabbitmq_1 | log : tty
rabbitmq_1 | sasl log : tty
rabbitmq_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit@bd20dc3d3d2a
...plus a load of INFO reports. This led me to believe that the RabbitMQ server was up and running, but apparently not as I cannot connect.
The only way I have gotten this to work is by manually installing Erlang and RabbitMQ (on a Windows system here) which does appear to let me complete the tutorial.
Why is Docker even mentioned in this tutorial though? The docker-compose.yml does not appear to do what the tutorial suggests.
What is this file actually doing here and how would one run RabbitMQ in a docker container for the purposes of this tutorial? Is this an issue with port numbers?

It turns out the issue was with the Spring RabbitMQ template connection information.
The Spring tutorial assumes the normal, manual installation of RabbitMQ (plus Erlang), and the RabbitMQ Spring template uses some default connection parameters that are not compatible with the image in the docker-compose file specified in the tutorial.
To solve this I needed to add a Spring application.properties file to the resources folder in my application's directory structure. Next I needed to find the IP address of my Docker machine (the VM that hosts the containers under Docker Toolbox) using the following command:
docker-machine ip
which prints the IP address. I then added the following parameters to the application.properties file:
spring.rabbitmq.host={docker-machine ip address}
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
The port, username and password here are all defaults and can be found in the RabbitMQ documentation.
Doing this I was able to have my application connect correctly to the RabbitMQ server running in the Docker container.
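Before wiring up Spring, one quick way to confirm the broker is reachable is to hit the management UI that the compose file maps to port 15672; the endpoint below is the standard RabbitMQ management API (substitute the docker-machine IP for localhost if you are on Docker Toolbox):

```
curl -u guest:guest http://localhost:15672/api/overview
```

If this returns a JSON document, the broker is up and the default credentials work; if the connection is refused, the container or the port mapping is the problem.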
It appears the Spring tutorial is slightly incomplete, as it does not inform the reader that some extra steps are required when using the RabbitMQ docker-compose file instead of the manual RabbitMQ installation that the rest of the tutorial assumes.

From what I know, it is not always possible to know the IP address in advance; instead of the IP address, you should provide the DNS name, which is the name of the RabbitMQ service defined in your docker-compose file.
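For example, if the application itself runs as a service in the same docker-compose file, it can reach the broker through the service name on the compose network. A minimal sketch (the app service and image names here are hypothetical):

```
rabbitmq:
  image: rabbitmq:management
  ports:
    - "5672:5672"
    - "15672:15672"
app:
  image: my-spring-app   # hypothetical application image
  links:
    - rabbitmq
```

Then spring.rabbitmq.host=rabbitmq in application.properties resolves to the broker container, regardless of which IP address it was assigned.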

Related

Configure IBM ACE 12 Toolkit to listen to IBM MQ queue and write to one

I am trying to use the ACE Toolkit so that it listens to / reads from an IBM MQ queue (a Docker container, dev version, running locally).
The documentation instructs simply:
"You can use the Security identity property on the MQ node or MQEndpoint policy to pass a user name and password to the queue manager, by specifying a security identity that contains those credentials. The identity is defined using the mqsisetdbparms command."
How do I run the "mqsisetdbparms" command, and where can I find it?
I use Ubuntu Linux (for now).
Alternatively, can I test my ACE flow by running the MQ manager (dev) in an unsecured way, so that it does not expect a user / password?
Now I am getting this error:
2023-01-03 20:57:07.515800: BIP2628W: Exception condition detected on input node 'MQFlow.MQ Input'.
2023-01-03 20:57:07.515866: BIP2678E: Failed to make a server connection to queue manager 'QM1': MQCC=2; MQRC=2058.
My docker-compose.yml:
version: '3.7'
services:
  mq-manager:
    container_name: mq-manager
    build:
      context: ./mq
      dockerfile: Dockerfile
    image: ibm-mq
    ports:
      - '1414:1414'
      - '9443:9443'
    environment:
      - LICENSE=accept
      - MQ_QMGR_NAME=QM1
      # - MQ_APP_PASSWORD=passw0rd
My Dockerfile:
FROM ibmcom/mq:latest
For local testing, you can configure it without using mqsisetdbparms, like this:
Configure a policy in $YOUR_ACE_WORK_DIR/run/DefaultPolicies/MQ.policyxml:
<policies>
  <policy policyType="MQEndpoint" policyName="MQ" policyTemplate="MQEndpoint">
    <connection>CLIENT</connection>
    <destinationQueueManagerName>QM1</destinationQueueManagerName>
    <queueManagerHostname>localhost</queueManagerHostname>
    <listenerPortNumber>1414</listenerPortNumber>
    <channelName>DEV.ADMIN.SVRCONN</channelName>
    <CCDTUrl></CCDTUrl>
    <securityIdentity>MqIdentity</securityIdentity>
    <useSSL>false</useSSL>
    <SSLPeerName></SSLPeerName>
    <SSLCipherSpec></SSLCipherSpec>
    <SSLCertificateLabel></SSLCertificateLabel>
    <MQApplName></MQApplName>
    <reconnectOption>default</reconnectOption>
  </policy>
</policies>
Configure a remote default queue manager and credentials in $YOUR_ACE_WORK_DIR/overrides/server.conf.yaml:
remoteDefaultQueueManager: '{DefaultPolicies}:MQ'
Credentials:
  ServerCredentials:
    mq:
      MqIdentity:
        username: 'admin'
        password: 'passw0rd'
Restart your ACE server.
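To answer the original question directly: mqsisetdbparms ships with the ACE installation itself. You make it available by sourcing the mqsiprofile script from the install's server/bin directory, then run the command against your work directory. A hedged sketch only; the install path and credentials below are examples, so check them against your own setup:

```
# Load the ACE command environment (path depends on where ACE is installed)
. /opt/ibm/ace-12/server/bin/mqsiprofile

# Store the MQ credentials under the security identity name referenced by the policy
mqsisetdbparms --work-dir $YOUR_ACE_WORK_DIR -n mq::MqIdentity -u admin -p passw0rd
```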

How to check if Cloud Pub/Sub emulator is up and running?

I have GC functions which I develop and test locally by using Cloud Pub/Sub emulator.
I want to be able to check from within Go code whether the Cloud Pub/Sub emulator is up and running. If it is not, I would like to inform developers that they should start the emulator before executing the code locally.
When the emulator starts I noticed a line
INFO: Server started, listening on 8085
Maybe I can check if the port is open, or something similar.
I guess you have used this command:
gcloud beta emulators pubsub start
And you got the following output:
[pubsub] This is the Google Pub/Sub fake.
[pubsub] Implementation may be incomplete or differ from the real system.
[pubsub]
[pubsub] INFO: IAM integration is disabled. IAM policy methods and ACL checks are not supported
[pubsub]
[pubsub] INFO: Applied Java 7 long hostname workaround.
[pubsub]
[pubsub] INFO: Server started, listening on 8085
If you take a look at the second INFO message you'll notice that the emulator runs on Java, so the process name will be java. Now you can run this command:
sudo lsof -i -P -n
This lists all the listening ports and applications; the output should be something like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
XXXX
XXXX
java XXX XXX XX IPv4 XXX 0t0 TCP 127.0.0.1:8085 (LISTEN)
Alternatively you can modify the previous command to show only what is happening on the desired port:
sudo lsof -i -P -n | grep 8085

How to get the API to test locally the server after docker-compose up?

I want to test the server for AJAX requests before pushing my code. I am running a Docker server using the command docker-compose -f docker-compose-withdb.yml up, but I don't know which URL I have to make requests to in order to test it locally.
This is the output snippet I am getting after docker-compose up:
taskmaster_1 |
taskmaster_1 | Waiting for rabbitmq:5672 to become available ... done
taskmaster_1 | rm: could not remove directory (code EBUSY): /tmp/runbox
rabbitmq_1 | 2019-05-22 08:02:48.857 [info] <0.657.0> accepting AMQP connection <0.657.0> (172.19.0.9:49598 -> 172.19.0.3:5672)
I have tried making requests to localhost:5672 and 172.19.0.9:49598.
You should map your API ports to the host. Check your docker-compose file to see if there is a line like this:
ports:
  - 80:8080
where the first number is the host port and the second is the container port.

Can't connect to Neo

I've installed Docker on OSX and downloaded the neo image. When I run it (using the args on the image's home page), everything seems to work, but the last lines of the log look like this:
00:20:39.662 [main] INFO org.eclipse.jetty.server.Server - Started @4761ms
2015-10-05 00:20:39.663+0000 INFO [API] Server started on: http://022b5f3a38fc:7474/
2015-10-05 00:20:39.663+0000 INFO [API] Remote interface ready and available at [http://022b5f3a38fc:7474/]
This seems odd, and attempting to connect my browser to either http://localhost:7474/ or the indicated http://022b5f3a38fc:7474/ results in an error.
What am I missing here?
You'll want to use the IP address of the docker VM, which you can determine with this command:
docker-machine inspect default | grep IPAddress
The default IP address is 192.168.99.100
So depending on which port you exposed when running the Neo4j docker container you can access the Neo4j browser at:
http://192.168.99.100:7474
or
http://192.168.99.100:8474
Port 8474 is the binding specified by this command:
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
which is the example given in the documentation here

Can't access Cloud9 remotely

I have installed the Cloud9 IDE on my server, but I can't access it remotely.
info - socket.io started
Project root is: .
Trying to start your browser in: http://127.0.0.1:3000
Does Cloud9 only work locally?
Yes; by default the Cloud9 webserver listens only on the localhost interface. To make it listen on another address, add the following argument to the command line: -l 0.0.0.0, e.g.
./cloud9.sh -l 0.0.0.0 -w ~/workspace
