Configure IBM ACE 12 Toolkit to listen to IBM MQ queue and write to one - ibm-mq

I am trying to use the ACE Toolkit so that it listens to / reads from an IBM MQ queue (Docker container, dev version, running locally).
The documentation instructs simply:
"You can use the Security identity property on the MQ node or MQEndpoint policy to pass a user name and password to the queue manager, by specifying a security identity that contains those credentials. The identity is defined using the mqsisetdbparms command."
How do I run the "mqsisetdbparms" command, and where can I find it?
I use Ubuntu Linux (for now).
Alternatively, can I test my ACE flow by running the MQ queue manager (dev) in an unsecured way, so that it does not expect a user name / password?
Currently I am getting this error:
2023-01-03 20:57:07.515800: BIP2628W: Exception condition detected on input node 'MQFlow.MQ Input'.
2023-01-03 20:57:07.515866: BIP2678E: Failed to make a server connection to queue manager 'QM1': MQCC=2; MQRC=2058.
My docker-compose.yml:
version: '3.7'
services:
  mq-manager:
    container_name: mq-manager
    build:
      context: ./mq
      dockerfile: Dockerfile
    image: ibm-mq
    ports:
      - '1414:1414'
      - '9443:9443'
    environment:
      - LICENSE=accept
      - MQ_QMGR_NAME=QM1
      # - MQ_APP_PASSWORD=passw0rd
My Dockerfile:
FROM ibmcom/mq:latest

For local testing, you can configure this without using mqsisetdbparms, like this. (For context, MQRC 2058 is MQRC_Q_MGR_NAME_ERROR; in ACE it commonly means the server attempted a local bindings connection or the queue manager name did not match, which a client-mode MQEndpoint policy like the one below addresses.)
Configure a policy in $YOUR_ACE_WORK_DIR/run/DefaultPolicies/MQ.policyxml:
<policies>
  <policy policyType="MQEndpoint" policyName="MQ" policyTemplate="MQEndpoint">
    <connection>CLIENT</connection>
    <destinationQueueManagerName>QM1</destinationQueueManagerName>
    <queueManagerHostname>localhost</queueManagerHostname>
    <listenerPortNumber>1414</listenerPortNumber>
    <channelName>DEV.ADMIN.SVRCONN</channelName>
    <CCDTUrl></CCDTUrl>
    <securityIdentity>MqIdentity</securityIdentity>
    <useSSL>false</useSSL>
    <SSLPeerName></SSLPeerName>
    <SSLCipherSpec></SSLCipherSpec>
    <SSLCertificateLabel></SSLCertificateLabel>
    <MQApplName></MQApplName>
    <reconnectOption>default</reconnectOption>
  </policy>
</policies>
Configure a remote default queue manager and credentials in $YOUR_ACE_WORK_DIR/overrides/server.conf.yaml:
remoteDefaultQueueManager: '{DefaultPolicies}:MQ'
Credentials:
  ServerCredentials:
    mq:
      MqIdentity:
        username: 'admin'
        password: 'passw0rd'
Restart your ACE server.
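On the original question of where mqsisetdbparms lives: it ships with ACE itself, under the server/bin directory of the installation, and is available once you source the mqsiprofile script. As a rough sketch, assuming a typical Linux install path and the work directory and credentials used above (adjust all three to your environment):

. /opt/ibm/ace-12/server/bin/mqsiprofile
mqsisetdbparms --work-dir $YOUR_ACE_WORK_DIR -n mq::MqIdentity -u admin -p passw0rd

This stores the credentials under the security identity MqIdentity, which is what the securityIdentity element of the policy above refers to.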

Related

Keycloak: Invalid token issuer when running from internal docker container

I'm having some issues with configuring Keycloak to run on our server.
Locally it works great, but in our test environment, after login, any call using the received access token gets "Invalid token issuer. Expected "http://keycloak:8080/auth/realms/{realmName}" but was "http://{our-test-server-IP}/auth/realms/{realmName}"".
So basically, our backend connects to the internal Keycloak docker image, but when the request comes in, Keycloak expects the issuer to be the configured external IP. Even though the issuers point to the same service, Keycloak sees them as different and responds with a 401.
docker-compose.yml:
keycloak:
  image: jboss/keycloak:12.0.4
  restart: on-failure
  environment:
    PROXY_ADDRESS_FORWARDING: "true"
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: password
    KEYCLOAK_LOGLEVEL: DEBUG
    KEYCLOAK_IMPORT: /etc/settings/realm.json -Dkeycloak.profile.feature.upload_scripts=enabled
    TZ: Europe/Bucharest
    DB_VENDOR: POSTGRES
    DB_ADDR: db
    DB_DATABASE: user
    DB_SCHEMA: keycloak
    DB_USER: user
    DB_PASSWORD: user
  ports:
    - 8090:8080
  volumes:
    - ./settings:/etc/settings
  depends_on:
    - db
Spring application.yml:
keycloak:
  cors: true
  realm: Realm-Name
  resource: back-office
  auth-server-url: http://keycloak:8080/auth/
  public-client: false
  credentials:
    secret: 8401b642-0ae9-4dc8-87a6-2f494b388a49
  keycloak-client:
    id: bcc94ed5-0099-40e0-b460-572eba3f0214
If we change the backend property auth-server-url to point to the exposed endpoint instead of the internal docker container, we get a timeout; it seems it doesn't want to connect at all. I understand that the main issue is that we are running both the Keycloak instance and our backend application on the same server, but I don't see why that shouldn't work, or why they cannot connect to each other.
We tried setting FRONTEND_URL in the environment when running the container and in the Keycloak admin console, but nothing changed. We also tried setting forceBackendUrlToFrontendUrl to true in the standalone.xml/standalone-ha.xml files (./jboss-cli.sh --connect "/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.forceBackendUrlToFrontendUrl, value=true)") and reloading the Keycloak instance inside the docker container using ./jboss-cli.sh --connect command=:reload, but nothing changed.
I understand that by setting FRONTEND_URL, all tokens should be issued by the Keycloak instance with that URL and we would not have this issue, but I've tried everything I've found so far on this and nothing seems to change. How can I make sure that the issuer in the access token and the issuer the backend service expects are the same (hopefully the frontend one)? How can I configure this? Is there some property I'm missing, or did I do something wrong while configuring it?
Might be related to this answer: https://stackoverflow.com/a/64095482/13494285
You could set the Host header value to the expected URL.
To override this behavior, you might try setting the KEYCLOAK_HOSTNAME environment variable to the expected URL.
It seems the documentation might not be up to date (it suggests the KEYCLOAK_FRONTEND_URL variable here), but KEYCLOAK_HOSTNAME is instead used to set the fixed hostname provider, as seen here.
In this context, KEYCLOAK_HTTP_PORT is also required, to set the port to 8080.
Setting KEYCLOAK_HOSTNAME to the external hostname (as defined in KEYCLOAK_FRONTEND_URL) definitely worked in my case (an Eclipse Che installation on a vanilla Kubernetes cluster).
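As an illustrative sketch, the relevant additions to the compose service's environment block might look like this (the hostname value is a placeholder for your external address, not something from the original setup):

environment:
  KEYCLOAK_HOSTNAME: your-external-hostname-or-IP
  KEYCLOAK_HTTP_PORT: "8080"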

Dropwizard crashing on Heroku

I am trying to deploy my Dropwizard project to Heroku.
I have added a Procfile and a Postgres DB to the Heroku app.
My Procfile reads:
web: java $JAVA_OPTS -Ddw.server.connector.port=$PORT -Ddw.database.url=$DATABASE_URL -jar target/api-1.0-SNAPSHOT.jar server config.yml
When I try to deploy I receive the following error/crash message in the logs.
org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator: HHH000342: Could not obtain connection to query metadata : Driver:org.postgresql.Driver@53d13cd4 returned null for URL:postgres://fdeqzbddzbefaz:138912590e989b1b8fab5d169a1aea291f04b2d3bc040b1bbf6642a9207a5355@ec2-54-235-101-91.compute-1.amazonaws.com:5432/d67crr4pvqrfee
Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
State changed from starting to crashed
Process exited with status 1
My config.yml reads
database:
  # the name of your JDBC driver
  driverClass: org.postgresql.Driver
  # the username
  user: localusername
  # the JDBC URL
  url: jdbc:postgresql://localhost/dbname
# use the simple server factory if you only want to run on a single port
# HEROKU NOTE - the port gets overridden with the Heroku $PORT in the Procfile
server:
  type: simple
  applicationContextPath: /
  #adminContextPath: /admin # If you plan to use an admin path, you'll need to also use non-root app path
  connector:
    type: http
    port: 8080
Does anyone have any troubleshooting ideas?
The DATABASE_URL env var is not directly compatible with the JDBC URL format. See docs. Specifically,
The DATABASE_URL for the Heroku Postgres add-on follows the below convention
postgres://username:password@host:port/dbname
However the Postgres JDBC driver uses the following convention:
jdbc:postgresql://host:port/dbname?user=username&password=password
Instead, try using JDBC_DATABASE_URL as documented here
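For example, a minimal sketch of a Procfile using that variable, assuming the same jar and config file as above:

web: java $JAVA_OPTS -Ddw.server.connector.port=$PORT -Ddw.database.url=$JDBC_DATABASE_URL -jar target/api-1.0-SNAPSHOT.jar server config.yml

Heroku populates JDBC_DATABASE_URL for JVM apps in the jdbc:postgresql://host:port/dbname?user=username&password=password form, so no manual translation is needed.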

cannot configure HDFS address using gethue/hue docker image

I'm trying to use the Hue docker image from gethue/hue, but it seems to ignore the configuration I give it and always looks for HDFS on localhost instead of the docker container I point it at.
Here is some context:
I'm using the following docker compose to launch a HDFS cluster:
hdfs-namenode:
  image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
  hostname: namenode
  environment:
    - CLUSTER_NAME=davidov
  ports:
    - "8020:8020"
    - "50070:50070"
  volumes:
    - ./data/hdfs/namenode:/hadoop/dfs/name
  env_file:
    - ./hadoop.env
hdfs-datanode1:
  image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
  depends_on:
    - hdfs-namenode
  links:
    - hdfs-namenode:namenode
  volumes:
    - ./data/hdfs/datanode1:/hadoop/dfs/data
  env_file:
    - ./hadoop.env
This launches images from BigDataEurope, which are already properly configured, including:
- the activation of webhdfs (in /etc/hadoop/hdfs-site.xml):
  - dfs.webhdfs.enabled set to true
- the hue proxy user (in /etc/hadoop/core-site.xml):
  - hadoop.proxyuser.hue.hosts set to *
  - hadoop.proxyuser.hue.groups set to *
Then, I launch Hue following their instructions:
First, I launch a bash prompt inside the docker container:
docker run -it -p 8888:8888 gethue/hue:latest bash
Then, I modify desktop/conf/pseudo-distributed.ini to point to the correct hadoop "node" (in my case a docker container with the address 172.30.0.2):
[hadoop]
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://172.30.0.2:8020
# NameNode logical name.
## logical_name=
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
## webhdfs_url=http://172.30.0.2:50070/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True
And then I launch hue using the following command (still inside the hue container):
./build/env/bin/hue runserver_plus 0.0.0.0:8888
I then point my browser to localhost:8888, create a new user ('hdfs' in my case), and launch the HDFS file browser module. I then get the following error message:
Cannot access: /user/hdfs/.
HTTPConnectionPool(host='localhost', port=50070): Max retries exceeded with url: /webhdfs/v1/user/hdfs?op=GETFILESTATUS&user.name=hue&doas=hdfs (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 99] Cannot assign requested address',))
The interesting bit is that it still tries to connect to localhost (which of course cannot work), even though I modified its config file to point to 172.30.0.2.
Googling the issue, I found another config file: desktop/conf.dist/hue.ini. I tried modifying this one and launching hue again, but same result.
Does anyone know how I could correctly configure Hue in my case?
Thanks in advance for your help.
Regards,
Laurent.
Your one-off docker run command is not on the same network as the docker-compose containers.
You would need something like this, replacing [projectname] with the folder you started docker-compose up in
docker run -ti -p 8888:8888 --network="[projectname]_default" gethue/hue bash
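If you're unsure of the generated network name, docker network ls will list the networks that Compose created.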
I would suggest using Docker Compose for the Hue container as well, with a volume mount for the INI files under desktop/conf/; then you can simply specify
fs_defaultfs=hdfs://namenode:8020
(since you put hostname: namenode in the compose file)
You'll also need to uncomment the WebHDFS line for your changes to take effect.
All INI files are merged in the conf folder for Hue
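As a rough sketch of that Compose-based approach (the service name and the conf mount path inside the image are assumptions; adapt them to your setup):

hue:
  image: gethue/hue:latest
  ports:
    - "8888:8888"
  volumes:
    # assumed location of Hue's conf directory inside the gethue/hue image
    - ./hue/conf:/usr/share/hue/desktop/conf
  depends_on:
    - hdfs-namenode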

MongoDB: Server has startup warnings [duplicate]

This question already has answers here:
MongoDB: Server has startup warnings ''Access control is not enabled for the database''
(4 answers)
Closed 2 years ago.
I first installed MongoDB 3.2.5 today, but when I started it and used the MongoDB shell, it gave me the warnings below:
C:\Windows\system32>mongo
MongoDB shell version: 3.2.5
connecting to: test
Server has startup warnings:
2016-04-16T11:06:17.943+0800 I CONTROL [initandlisten]
2016-04-16T11:06:17.943+0800 I CONTROL [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified.
2016-04-16T11:06:17.943+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted,
2016-04-16T11:06:17.943+0800 I CONTROL [initandlisten] ** and the server listens on all available network interfaces.
2016-04-16T11:06:17.943+0800 I CONTROL [initandlisten]
>
My OS is Microsoft Windows [version 10.0.10586].
You haven't configured the security features in MongoDB, such as authorization and authentication. Use this link for more details. You can ignore this while you are learning MongoDB, but you should address it before the product goes to production.
You can enable access control by using mongod --auth.
For example, you can run mongod --auth --port 27017 --dbpath /data/db1. After that you can secure your database with a username and password.
You can then authenticate as a database user using the following commands:
use admin
db.auth("myUserAdmin", "abc123")
After that you can use mongo --port 27017 -u "myUserAdmin" -p "abc123" --authenticationDatabase "admin" to connect to the database.
You can add bind_ip to mongod.conf as follows:
bind_ip = 127.0.0.1,192.168.161.100
You can define several if you need to. The bind_ip option tells MongoDB which local network interfaces to accept connections on, not which "remote IP addresses" may connect.
Then run mongod --config <file path to your mongod.conf>.
Altogether, you can run mongod --auth --port 27017 --dbpath /data/db1 --config <file path to your mongod.conf>.
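For reference, a minimal sketch of the same settings in the YAML configuration format used by MongoDB 2.6 and later (the interface addresses are the example values from above):

net:
  port: 27017
  bindIp: 127.0.0.1,192.168.161.100
security:
  authorization: enabled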
Run mongod --auth to enable access control. Detailed information can be found here.
Select the target DB (e.g. use admin)
Create a user in the selected DB:
db.createUser(
  {
    user: "root",
    pwd: "root",
    roles: [ "readWrite", "dbAdmin" ]
  }
)
The above command creates the root user with the readWrite and dbAdmin roles in the admin DB (more info about roles).
Now, run the server in authentication mode using mongod --auth.
Run the client and provide the username and password to log in, using db.auth("root","root").

Jenkins on Windows - Could not connect to SMTP host, "Unrecognized SSL message"

How do I enable STARTTLS for Jenkins running on Windows?
I have Jenkins running on a Windows 2008 server, and my email notifications are configured with the following info:
Host: smtp.office365.com
Port: 587
SMTP Auth: True
SSL: True
etc...
When I run a test, I get the following exception message:
javax.mail.MessagingException: Could not connect to SMTP host: smtp.office365.com, port: 587;
nested exception is:
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
The issue seems to be due to smtp.office365.com using STARTTLS for the connection security. I have tried to enable STARTTLS through the jenkins.xml config file, by adding the following argument:
-Dmail.smtp.starttls.enable=true
Is this the correct switch/parameter?
Is jenkins.xml the correct file to update?
Note: I am aware, some people have solved this in their Linux environment, but I am looking for a solution that's specific to Windows. Below is a snippet of my current jenkins.xml file:
<service>
  <id>jenkins</id>
  <name>Jenkins</name>
  <description>This service runs Jenkins continuous integration system.</description>
  <env name="JENKINS_HOME" value="%BASE%"/>
  <!--
    if you'd like to run Jenkins with a specific version of Java, specify a full path to java.exe.
    The following value assumes that you have java in your PATH.
  -->
  <executable>%BASE%\jre\bin\java</executable>
  <arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 -Dmail.smtp.starttls.enable=true</arguments>
  <!--
    interactive flag causes the empty black Java window to be displayed.
    I'm still debugging this.
    <interactive />
  -->
  <logmode>rotate</logmode>
  <onfailure action="restart" />
</service>
I think the argument order DOES matter. I had to put the -Dmail.smtp.starttls.enable=true before the -jar argument.
-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dmail.smtp.starttls.enable=true -jar "%BASE%\jenkins.war" --httpPort=8080
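In the jenkins.xml above, that means the <arguments> element becomes:

<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dmail.smtp.starttls.enable=true -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>

This ordering matters because -D system properties must precede -jar; anything after the jar path is passed to the application as program arguments instead of being applied to the JVM.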
