Using Pact Broker with a path - Maven

I am trying to get my Pact Broker working in my environment. I have the broker running in K8s under https://mydomain/pactbroker (image: dius/pactbroker).
I am able to publish to the broker with the Maven plugin. However, when I try to verify, I get an error: Request to path '/' failed with response 'HTTP/1.1 401 Unauthorized'.
Can someone help me out?
<build>
  <plugins>
    <plugin>
      <groupId>au.com.dius</groupId>
      <artifactId>pact-jvm-provider-maven</artifactId>
      <version>4.0.10</version>
      <configuration>
        <serviceProviders>
          <!-- You can define as many as you need, but each must have a unique name -->
          <serviceProvider>
            <name>FaqService</name>
            <protocol>http</protocol>
            <host>localhost</host>
            <port>8080</port>
            <pactBroker>
              <url>https://mydomain/pactbroker/</url>
              <authentication>
                <scheme>basic</scheme>
                <username>user</username>
                <password>pass</password>
              </authentication>
            </pactBroker>
          </serviceProvider>
        </serviceProviders>
      </configuration>
    </plugin>
  </plugins>
</build>
Added information (Jun 18, 12:52 CET):
Going through the logs, it seems the plugin tries to fetch the HAL root information via path "/". However, that request fails with:
[WARNING] Could not fetch the root HAL document
When I enable preemptive authentication I can see that it gives a warning like:
[WARNING] Using preemptive basic authentication with the pact broker at https://mydomain
Note that the URL in the warning has no path.

Have you confirmed you can use the broker correctly outside of Maven?
e.g. can you curl --user user:pass https://mydomain/pactbroker/ and get back an API result? Can you visit it in the browser?
You may also need to make sure all relative links etc. work. See https://docs.pact.io/pact_broker/configuration#running-the-broker-behind-a-reverse-proxy and the docs for whatever proxy you have in front of it.
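For reference, a quick way to check the broker's HAL index outside of Maven (using the same credentials as in the pom) is:
curl --user user:pass -H "Accept: application/hal+json" https://mydomain/pactbroker/
A correctly proxied broker should answer with the JSON index resource, including its _links, at the /pactbroker path.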

The issue was with Pact itself. An issue was raised and the fix should be merged into the next release soon (4.1.4).

Related

Spinnaker & Okta integration failing

Scenario:
Upgraded Spinnaker to 1.12.0. No other config changes that would impact this integration (we had to modify an S3 IAM because it quit working). The Okta integration stopped working. The public key was reissued for the ingress during the install process, which may be relevant.
SAML-TRACE shows the payload getting to Okta and back.
Spinnaker throws two different errors depending on the browser and how I get there:
Direct link to the Deck URL: (500) No IDP was configured, please update included metadata with at least one IDP (seen in browser and Gate)
Okta "chicklet" in the Okta dashboard: (401) Authentication Failed: Incoming SAML message is invalid
Config details (again, none of this changed):
Downloading metadata directly
JKS is being leveraged and is valid
service URL is confirmed
alias for JKS is confirmed
I had this issue as well when upgrading from 1.10.13 to 1.12.2. I found lots of these error messages in Gate's logs:
2019-02-19 05:31:30.421 ERROR 1 --- [.0-8084-exec-10] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP] with root cause
org.opensaml.saml2.metadata.provider.MetadataProviderException: No IDP was configured, please update included metadata with at least one IDP
at org.springframework.security.saml.metadata.MetadataManager.getDefaultIDP(MetadataManager.java:795) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.saml.context.SAMLContextProviderImpl.populatePeerEntityId(SAMLContextProviderImpl.java:157) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.saml.context.SAMLContextProviderImpl.getLocalAndPeerEntity(SAMLContextProviderImpl.java:127) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.saml.SAMLEntryPoint.commence(SAMLEntryPoint.java:146) ~[spring-security-saml2-core-1.0.2.RELEASE.jar:1.0.2.RELEASE]
at org.springframework.security.web.access.ExceptionTranslationFilter.sendStartAuthentication(ExceptionTranslationFilter.java:203) ~[spring-security-web-4.2.9.RELEASE.jar:4.2.9.RELEASE]
...
After downgrading back to 1.10.13, I upgraded to the next version, 1.11.0, and found that's when the issue started. Eventually, I looked at Gate's logs from the launch of the container and found:
2019-02-20 22:31:40.132 ERROR 1 --- [0.0-8084-exec-3] o.o.s.m.provider.HTTPMetadataProvider : Error retrieving metadata from https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
javax.net.ssl.SSLException: Error in hostname verification
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.verifyHostname(TLSProtocolSocketFactory.java:241) ~[openws-1.5.4.jar:na]
at org.opensaml.ws.soap.client.http.TLSProtocolSocketFactory.createSocket(TLSProtocolSocketFactory.java:186) ~[openws-1.5.4.jar:na]
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707) ~[commons-httpclient-3.1.jar:na]
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387) ~[commons-httpclient-3.1.jar:na]
...
This led me to realize that the TLS certificate was being rejected by Gate. I'm not sure why it suddenly started failing the check. Up to this point, I had it configured as:
$ hal config security authn saml edit --metadata https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
I ended up downloading the metadata file and redeploying with halyard.
$ wget https://000000000000.okta.com/app/00000000000000000/sso/saml/metadata
$ hal config security authn saml edit --metadata "${PWD}/metadata"
$ hal config version edit --version 1.12.2
$ hal deploy apply
Opened up a private browser window as suggested by the Spinnaker documentation and Gate started redirecting to Okta correctly again.
Issue filed: https://github.com/spinnaker/spinnaker/issues/4017.
So I ended up finding the answer. Apparently the Tomcat configuration for Gate changed in later Spinnaker versions.
I created this snippet in ~/.hal/default/profiles/gate-local.yml
server:
  tomcat:
    protocolHeader: X-Forwarded-Proto
    remoteIpHeader: X-Forwarded-For
    internalProxies: .*
Deployed Spinnaker and it was back to working.

Migration with Liberty server - how to configure a mail session in Liberty

Please refer to the code snippet below from the Liberty server.xml:
<library id="objectFactoryLib">
  <fileset dir="" includes="naming-factory-5.5.15.jar"/>
  <fileset dir="" includes="javax.mail-1.5.5.jar"/>
</library>
<jndiObjectFactory className="org.apache.naming.factory.MailSessionFactory" id="mailSessionFactory" libraryRef="objectFactoryLib" objectClassName="javax.mail.Session"/>
Getting the below error while deploying the application on Liberty server version 8.5.5.7:
Cannot convert value of type [javax.mail.Session] to required type [javax.mail.Session] for property 'session': no matching editors or conversion strategy found.
(The identical type names suggest two copies of javax.mail.Session loaded by different class loaders.)
I have already removed the mail jar from the other places and kept it only in the ear/WEB-INF/lib folder, but I am still getting the same error.
Can anyone tell me how to configure a mail session in Liberty?
Liberty 8.5.5.7 does not support the JavaMail 1.5 feature.
Note: my application already runs successfully on Tomcat/WAS.
You should use <feature>javaMail-1.5</feature> in your server.xml (inside the featureManager element) and then configure the session something like this:
<mailSession
  description="Mail session for testing"
  from="Liberty2@itso.ibm.com"
  host="mailHost"
  jndiName="mail/itsoMailSession"
  mailSessionID="itsoMailSession"
  user="validUser@account.com"
  password="password"/>
In the application you request it like this:
@Resource(lookup="mail/itsoMailSession")
Session mailSession;
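As a minimal sketch of what using the injected session could look like (the class name and the recipient address here are placeholders, not from the original answer):
import javax.annotation.Resource;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class MailBean {

    // Injected by the container from the mailSession defined in server.xml
    @Resource(lookup = "mail/itsoMailSession")
    private Session mailSession;

    public void sendTestMail() throws MessagingException {
        MimeMessage message = new MimeMessage(mailSession);
        // Placeholder recipient; the from address comes from the mailSession config
        message.setRecipient(Message.RecipientType.TO,
                new InternetAddress("someone@example.com"));
        message.setSubject("Liberty mail session test");
        message.setText("Sent through the JNDI-configured mail session.");
        Transport.send(message);
    }
}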
Check the following resources:
WebSphere Liberty Redbook
How to write an application using JavaMail
Knowledge Center

SoapUI steps overwrite log4j settings

I'm having a setup where I run JBehave tests during a Maven build.
Test steps include sending requests to a web service with the SoapUI Java classes.
Everything is working fine, test-wise. My problem is that the SoapUI part of the process seems to overwrite the log4j settings, so that subsequent log calls don't get printed to the console (nor to files).
I've tried the workaround where I call
ClassLoader loader = this.getClass().getClassLoader();
URL resource = loader.getResource("log4j.xml");
PropertyConfigurator.configure(resource);
to try to reset the configuration to my original one, but no success so far.
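One thing that may be worth checking: PropertyConfigurator parses the log4j properties format, so pointing it at a log4j.xml file will not restore an XML configuration. A sketch of the reset using the matching XML configurator, assuming log4j.xml is on the classpath:
import java.net.URL;
import org.apache.log4j.LogManager;
import org.apache.log4j.xml.DOMConfigurator;

public final class Log4jReset {

    // Call this after the SoapUI steps have run
    public static void restoreLog4jConfig() {
        // Drop whatever configuration SoapUI installed
        LogManager.resetConfiguration();
        // Reload our own XML configuration from the classpath
        URL resource = Log4jReset.class.getClassLoader().getResource("log4j.xml");
        DOMConfigurator.configure(resource);
    }
}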
Log4j (1.2) and SoapUI (4.5.1) are declared with plain settings in the pom. The logger is created as:
protected final Log log = LogFactory.getLog(getClass());
The console output I get follows:
pool-1-thread-1 16:36:08,212 DEBUG ästeps.LoginSteps:25 - logging in user: testfir
pool-1-thread-1 16:36:08,213 DEBUG äpages.LoginPage:26 - Create LoginPage
pool-1-thread-1 16:36:08,985 DEBUG äpages.LoginPage:38 - login user: testfir
pool-1-thread-1 16:36:10,343 DEBUG äpages.WorkspacePage:36 - creating WorkspacePage
Givet user testfir has logged in
16:36:11,634 WARN [SoapUI] Missing folder [D:\proj\src\test\functional-tests\.\ext] for external libraries
16:36:11,809 INFO [DefaultSoapUICore] initialized soapui-settings from [C:\Users\xxx\soapui-settings.xml]
16:36:12,176 INFO [WsdlProject] Loaded project from [file:/D:/proj/src/test/functional-tests/src/test/resources/ReceiveCase-soapui.xml]
16:36:12,640 DEBUG [HttpClientSupport$SoapUIHttpClient] Attempt 1 to execute request
16:36:12,640 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Sending request: POST /soa-infra/services/default/ReceiveCases/ReceiveCase_v1_0_ep HTTP/1.1
16:36:13,841 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Receiving response: HTTP/1.1 200 OK
16:36:13,842 DEBUG [HttpClientSupport$SoapUIHttpClient] Connection can be kept alive indefinitely
And a case exists
When case is choosen
16:36:46,832 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Connection closed
Then the details are displyed
And I'm expecting a log output with
Setting case Id to: 123456
in the same manner as "Create login page".
I can't understand why this happens or what to do to get my log entries to show up. Any ideas out there?
Best regards, Christian
Managed to find the root of the problem.
It was Maven that distorted the file encoding. Adding
<configuration>
  <encoding>UTF-8</encoding>
  <inputEncoding>UTF-8</inputEncoding>
  <outputEncoding>UTF-8</outputEncoding>
  <argLine>-Dfile.encoding=UTF-8</argLine>
</configuration>
to the maven-surefire-plugin section of the pom file solved my issue.
/Cheers

Deploy a Maven site into Alfresco through FTP

I'm experiencing some issues at the moment when deploying a Maven site into Alfresco.
In my company, we use Alfresco as the ECM in our forge.
Since this tool supports FTP and indexes the content of any kind of text document, I'd like to push my Maven site into it.
But even though I'm able to deploy the site manually through FTP on Alfresco, or to upload it automatically using Maven, I'm not able to combine both:
Here is the relevant part of my pom.xml:
<distributionManagement>
  [...]
  <site>
    <id>forge-alfresco</id>
    <name>Serveur Alfresco de la Forge</name>
    <url>ftp://alfresco.mycompany.corp/Alfresco/doc/site</url>
  </site>
</distributionManagement>
<build>
  <extensions>
    <!-- Enabling the use of FTP -->
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ftp</artifactId>
      <version>2.2</version>
    </extension>
  </extensions>
</build>
And here is part of my settings.xml:
<servers>
  <server>
    <id>forge-alfresco</id>
    <username>jrrevy</username>
    <password>xxxxxxxx</password>
  </server>
</servers>
When I try to deploy using site:deploy, I'm facing this:
[INFO] [site:deploy {execution: default-cli}]
Reply received: 220 FTP server ready
Command sent: USER jrrevy
Reply received: 331 User name okay, need password for jrrevy
Command sent: PASS xxxxxx
Reply received: 230 User logged in, proceed
Command sent: SYST
Reply received: 215 UNIX Type: Java FTP Server
Remote system is UNIX Type: Java FTP Server
Command sent: TYPE I
Reply received: 200 Command OK
ftp://alfresco.mycompany.corp/Alfresco/doc/site/ - Session: Opened
[INFO] Pushing D:\project\workspaces\yyyyy\myproject\target\site
[INFO] >>> to ftp://alfresco.mycompany.corp/Alfresco/doc/site/./
Command sent: CWD /Alfresco/doc/site/
Reply received: 250 Requested file action OK
Recursively uploading directory D:\project\workspaces\yyyyy\myproject\target\site as ./
processing = D:\project\workspaces\yyyyy\myproject\target\site as ./
Command sent: CWD ./
Reply received: 550 Invalid path ./
Command sent: MKD ./
Reply received: 250 /Alfresco/doc/site/.
Command sent: CWD ./
Reply received: 550 Invalid path ./
ftp://alfresco.mycompany.corp/Alfresco/doc/site/ - Session: Disconnecting
ftp://alfresco.mycompany.corp/Alfresco/doc/site/ - Session: Disconnected
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error uploading site
Embedded error: Unable to change cwd on ftp server to ./ when processing D:\project\workspaces\yyyyy\myproject\target\site
I can't figure out what the problem is. Maybe the plugin version is not compatible... Maybe Alfresco's implementation is not fully compatible (forgive me for this outrage ;)). Maybe there is a setting in the server properties I missed.
I don't really know where to look, and after some time googling, I can't find what the matter is.
I already have some workarounds: I'll try to upload the website using the WebDAV protocol, and I can use some extra features (like deploying artifacts from Jenkins) on our CI platform, but I really want to know what the problem is.
Can someone help me?
Indeed, it looks like an Alfresco issue: issues.alfresco.com/jira/browse/ALF-4724.
I'm running Alfresco 3.1, and this issue seems to be solved in 3.3.5 and above.
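For what it's worth, the WebDAV workaround mentioned in the question would look roughly like this in the pom. This is only a sketch: the version number is illustrative, and the dav: URL depends on where your Alfresco instance exposes WebDAV (by default under /alfresco/webdav):
<distributionManagement>
  <site>
    <id>forge-alfresco</id>
    <url>dav:http://alfresco.mycompany.corp/alfresco/webdav/doc/site</url>
  </site>
</distributionManagement>
<build>
  <extensions>
    <!-- Wagon provider for the dav: protocol -->
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-webdav</artifactId>
      <version>1.0-beta-2</version>
    </extension>
  </extensions>
</build>
The server id must still match the credentials entry in settings.xml.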

How to use Pentaho Kitchen to connect to a database repository?

How can I use Pentaho Kitchen to connect to my central database repository from the command line?
Set up your connection in repositories.xml; you probably already have one of these if you have been using Spoon. Make sure the repositories.xml exists in .kettle for the installation where you are running Kitchen.
Then simply use these command line options:
/rep "YOUR REPO NAME"
/user "REPO USER"
/pass "REPO PASS"
Below is a Windows batch script example to run a Pentaho Data Integration (Kettle) job:
@echo off
SET LOG_PATHFILE=C:\logs\KITCHEN_name_of_job_%DATETIME%.log
call Kitchen.bat /rep:"name_repository" /job:"name_of_job" /dir:/foo/sub_foo1 /user:dark /pass:vador /level:Detailed >> %LOG_PATHFILE%
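On Linux/macOS the equivalent call would use kitchen.sh, which accepts the same switches in -option=value form (the log path here is just an example):
LOG_PATHFILE=/var/log/kitchen_name_of_job.log
./kitchen.sh -rep="name_repository" -job="name_of_job" -dir=/foo/sub_foo1 -user=dark -pass=vador -level=Detailed >> "$LOG_PATHFILE"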
The repository "name_repository" must be defined in /users/.kettle/repositories.xml. Just below is an example of this file:
<?xml version="1.0" encoding="UTF-8"?>
<repositories>
  <connection>
    <name>name_repository</name>
    <server>hostname</server>
    <type>MYSQL</type>
    <access>Native</access>
    <database>name_database_repository</database>
    <port>9090</port>
    <username>[name]</username>
    <password>[password]</password>
    <servername/>
    <data_tablespace/>
    <index_tablespace/>
    <attributes>
      <attribute><code>EXTRA_OPTION_MYSQL.defaultFetchSize</code><attribute>500</attribute></attribute>
      <attribute><code>EXTRA_OPTION_MYSQL.useCursorFetch</code><attribute>true</attribute></attribute>
      <attribute><code>FORCE_IDENTIFIERS_TO_LOWERCASE</code><attribute>N</attribute></attribute>
      <attribute><code>FORCE_IDENTIFIERS_TO_UPPERCASE</code><attribute>N</attribute></attribute>
      <attribute><code>IS_CLUSTERED</code><attribute>N</attribute></attribute>
      <attribute><code>PORT_NUMBER</code><attribute>9090</attribute></attribute>
      <attribute><code>QUOTE_ALL_FIELDS</code><attribute>N</attribute></attribute>
      <attribute><code>STREAM_RESULTS</code><attribute>Y</attribute></attribute>
      <attribute><code>SUPPORTS_BOOLEAN_DATA_TYPE</code><attribute>N</attribute></attribute>
      <attribute><code>USE_POOLING</code><attribute>N</attribute></attribute>
    </attributes>
  </connection>
  <repository>
    <id>KettleDatabaseRepository</id>
    <name>name_repository</name>
    <description>the Pentaho Data Integration (Kettle) repository</description>
    <connection>name_repository</connection>
  </repository>
</repositories>
