WireMock with multiple hosts and dnsmasq?

I'm working on mocking endpoints for an iOS app that hits over a dozen different HTTPS hosts. Based on my understanding of the WireMock docs and this answer from the maintainer, WireMock is designed to proxy/mock only one host. My current plan is to use dnsmasq to point one host at a time at WireMock.
                                        +----------+          +-------------------+
                                        |          |          |                   |
                                        | WireMock +--Proxy-->|  123.example.com  |
                                        |          |          |                   |
                                        +----^-----+          +-------------------+
                                             |
                                             |                +-------------------+
+-------+                               +----+----+           |                   |
|       +---> https://123.example.com ->|         +---------->|  xyz.example.com  |
|       |                               |         |           |                   |
|  App  +---> https://xyz.example.com ->| dnsmasq |           +-------------------+
|       |                               |         |
|       +---> https://9.different.com ->|         +---+       +-------------------+
+-------+                               +---------+   |       |                   |
                                                      +------>|  9.different.com  |
                                                              |                   |
                                                              +-------------------+
This seems pretty clunky; is there a better way to mock multiple hosts like this? One of the primary constraints is that these have to be tested over HTTPS, not unencrypted HTTP.
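For what it's worth, the dnsmasq half of the plan above is a one-line entry per mocked host; the IP here is an assumption for wherever WireMock happens to listen:

```
# /etc/dnsmasq.conf -- resolve the mocked host to the machine running WireMock
address=/123.example.com/192.168.1.10
```

Switching which host is mocked then means editing this line and restarting dnsmasq, which is part of what makes the setup clunky.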

You should be able to achieve this by creating stubs that have a Host header condition on the request. That way, e.g., a request to
GET /hello
Host: firsthost.example.com
And a request to
GET /hello
Host: secondhost.example.com
would match different stubs and therefore return different responses.
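As a sketch (paths, hostnames, and bodies are placeholders), the first of those stubs could be expressed as a WireMock JSON mapping file like this:

```json
{
  "request": {
    "method": "GET",
    "url": "/hello",
    "headers": {
      "Host": { "equalTo": "firsthost.example.com" }
    }
  },
  "response": {
    "status": 200,
    "body": "response for firsthost.example.com"
  }
}
```

A second mapping matching `Host: secondhost.example.com` on the same URL would then return a different body, so a single WireMock instance can stand in for many hosts.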

Related

OAuth2 Authorization Server - external login page and redirect after login

I am trying to integrate Spring Auth Server with an existing authentication provider, IBM WebSEAL, in the context of an OIDC flow.
Basically, I want Spring Auth Server to use the WebSEAL login page to authenticate a user (coming from a SPA/front-end app) and then return a JWT token to the front-end app.
On successful authentication, WebSEAL redirects to a configurable URL, adding a header to the request. This header contains the actual username and signals that the user is authenticated.
I was able to implement the flow and have Spring Auth Server use the external login page, but I don't understand which URL WebSEAL should redirect to. Do I need to create an explicit endpoint (such as /authenticated)?
It seems that the OAuth2 spec doesn't define an explicit endpoint for this particular case.
Adding diagram for clarity:
+----------------+                  +----------------+
|                |                  |                |
|                |                  |                |
| FRONT END APP  |                  |  BACK-END APP  |
|                |                  | (SPRING BOOT)  |
|                |                  |                |
|                |                  |                |
+---+----+-------+                  +----------------+
    |    |
    |    | 1 - /oauth2/authorize?
    |    |
4 - /oauth2/token
    |    |
+---+----+-------+                  +----------------+
|                |  3 - send header |                |
|  SPRING        |<-----------------+                |
|  AUTH          |                  |    WEBSEAL     |
|  SERVER        +----------------->|                |
|                |  2 - show form   |                |
+----------------+                  +----------------+
Thanks!

Force HTTPS using Namecheap and Heroku

I am using Heroku Automated Certificate Management and Namecheap Basic DNS.
My problem is that my non-SSL domains are still reachable.
Here is how they map in practice:
|---------------------|------------------------------|
| Entered Domain | Result Domain |
|---------------------|------------------------------|
| name.tld | https://www.name.tld/ |
|---------------------|------------------------------|
| www.name.tld | http://www.name.tld/ |
|---------------------|------------------------------|
| http://www.name.tld | http://www.name.tld/ |
|---------------------|------------------------------|
|https://www.name.tld | https://www.name.tld/ |
|---------------------|------------------------------|
| http://name.tld | https://www.name.tld/ |
|---------------------|------------------------------|
| https://name.tld | error: does not resolve |
|---------------------|------------------------------|
My Heroku Domains settings are:
|---------------------|-------------------|
| Domain Name | DNS Target |
|---------------------|-------------------|
| name.tld |name1.herokudns.com|
|---------------------|-------------------|
| www.name.tld |name2.herokudns.com|
|---------------------|-------------------|
My Namecheap Redirect Domain settings are:
|---------------------|---------------------|
| Source URL | Destination URL |
|---------------------|---------------------|
| name.tld |https://www.name.tld/|
|---------------------|---------------------|
| www.name.tld |https://www.name.tld/|
|---------------------|---------------------|
And my Namecheap Host Records settings are:
|---------------------|---------------------|---------------------|
| Type | Host | Value |
|---------------------|---------------------|---------------------|
| CNAME Record | www | name1.herokudns.com.|
|---------------------|---------------------|---------------------|
| URL Redirect Record | # |https://www.name.tld/|
|---------------------|---------------------|---------------------|
One thing to note: I do not put name2.herokudns.com into Namecheap because I think it would conflict.

Share objects across multiple containers

We are developing a Spring Boot application which is deployed on OpenShift 3. The application should be scalable to at least two pods, but we use internal caches and other "global" data (some lists, some maps...) which should be the same (i.e. shared) across all pods.
Is there a way to achieve such data sharing with a) a service embedded inside the Spring Boot application itself (which implies that each pod needs to find/know the others), or does it b) in every case require a standalone (potentially also scalable) cache service?
a)
|---- Application ----|
|                     |
|  |-------------|    |
|  | Pod 1  | *  |    |
|  |----------^--|    |
|             |       |
|  |----------v--|    |
|  | Pod 2  | *  |    |
|  |----------^--|    |
|             |       |
|  |----------v--|    |
|  | Pod n  | *  |    |
|  |-------------|    |
|                     |
|---------------------|
* "embedded cache service"
b)
|---- Application ----|
|                     |
|  |-------------|    |
|  | Pod 1       |----|-----\
|  |-------------|    |      \
|                     |       \
|  |-------------|    |        \   |-----------------------|
|  | Pod 2       |----|------------| Cache Service/Cluster |
|  |-------------|    |        /   |-----------------------|
|                     |       /
|  |-------------|    |      /
|  | Pod n       |----|-----/
|  |-------------|    |
|                     |
|---------------------|
Typically, if we used memcached or Redis, I think b) would be the only solution. But how is it with Hazelcast?
With Hazelcast, you can use both a) and b).
For scenario a), assuming you're using k8s on OpenShift, you can use the Hazelcast Kubernetes discovery plugin so that pods deployed in the same k8s cluster discover each other and form a cluster: https://github.com/hazelcast/hazelcast-kubernetes
For scenario b), Hazelcast has an OpenShift image as well, which requires an Enterprise subscription: https://github.com/hazelcast/hazelcast-openshift. If you need the open-source version, you can use the Hazelcast Helm chart to deploy the data cluster separately: https://github.com/helm/charts/tree/master/stable/hazelcast
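For scenario a, a minimal sketch of the embedded member's discovery configuration, assuming the plugin's documented discovery-strategy setup (the service-name value is a placeholder for whatever Kubernetes service fronts your pods):

```xml
<hazelcast>
  <network>
    <join>
      <!-- disable the default discovery mechanisms -->
      <multicast enabled="false"/>
      <tcp-ip enabled="false"/>
      <discovery-strategies>
        <discovery-strategy enabled="true"
            class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
          <properties>
            <!-- the Kubernetes service that selects the application pods -->
            <property name="service-name">my-hazelcast-service</property>
          </properties>
        </discovery-strategy>
      </discovery-strategies>
    </join>
  </network>
</hazelcast>
```

With this in place, each Spring Boot pod starts a Hazelcast member, the members find each other through the Kubernetes API, and the shared maps/lists live in the resulting cluster.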

How to run the same build with different predefined parameters in TeamCity

I'm trying to improve our build process by running one single build with 2-3 predefined sets of parameters.
Description: we have configurations C1, C2, C3 that set parameters and related configurations B1, B2, B3 with the build steps. They link to each other: C1-B1, C2-B2, C3-B3. In this scheme everything works fine; I pass parameters as mentioned here - How to pass Arguments between build configuration steps in team city?,
but I'm a bit worried because B1, B2, B3 are full copies of each other, and I would like to improve this. The only problem is that I couldn't find any mechanism to pass parameters from different configurations. If I use the %dep.% mechanism, I can use parameters from only one configuration.
UPD: Current scheme
+---+    +---+    +---+
|   |    |   |    |   |
| C1|    | C2|    | C3|
|   |    |   |    |   |
+-+-+    +-+-+    +-+-+
  |        |        |
  |        |        |
+-v-+    +-v-+    +-v-+
|   |    |   |    |   |
| B1|    | B2|    | B3|
|   |    |   |    |   |
+---+    +---+    +---+
the desired scheme:
+---+    +---+    +---+
|   |    |   |    |   |
| C1|    | C2|    | C3|
|   |    |   |    |   |
+-+-+    +-+-+    +-+-+
  |        |        |
  |        |        |
  |      +-v--+     |
  |      |    |     |
  |      |    |     |
  +----->| B1 |<----+
         |    |
         +----+
C1, C2, C3 set up configuration parameters
B1 contains only build steps, like clean, build, dist
Could anyone help me with that? Any ideas?
In your case, you can introduce a TeamCity meta-runner. The idea is that you combine multiple build steps with parameters and extract them as a new entity, available as a build runner.
See this documentation section for step-by-step instructions on creating a meta-runner.
You can define a parameter in the template, call it, for example, external.param, and give it no definition.
Then, in each configuration (C1, C2 and C3), define the value of this parameter as a reference to the specific dependency:
external.param = %dep.<source_cfg_id>.<source_param_name>%
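As a rough illustration of the shape of an extracted definition (the meta-runner name and the single command-line step here are made up; TeamCity generates this XML for you via "Extract meta-runner", so treat this as a sketch rather than a schema reference):

```xml
<meta-runner name="Common Build Steps">
  <description>Shared clean/build/dist steps, parameterized by external.param</description>
  <settings>
    <parameters>
      <!-- left without a value; each configuration (C1, C2, C3) supplies its own -->
      <param name="external.param" value="" />
    </parameters>
    <build-runners>
      <runner name="Build" type="simpleRunner">
        <parameters>
          <param name="script.content" value="./build.sh %external.param%" />
          <param name="use.custom.script" value="true" />
        </parameters>
      </runner>
    </build-runners>
    <requirements />
  </settings>
</meta-runner>
```

Once saved, the meta-runner shows up in the list of available runners, so C1, C2, and C3 can each add it as a single step instead of duplicating B1/B2/B3.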

In RobotFramework, is it possible to run test cases in For-Loop?

So my issue might be of a syntactic nature, maybe not, but I am clueless on how to proceed next. I am writing a test case in Robot Framework, and my end goal is to be able to run multiple tests back to back in a loop.
In the case below, the Log To Console call works fine and outputs the different values passed as parameters. The next call, "Query Database And Analyse Data", works as well.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
But then, when I try to make a test case with documentation and tags from "Query Database And Analyse Data", I get the error: Keyword Name cannot be Empty, which leads me to think that when the file gets to the [Documentation] tag, it doesn't understand that it is part of a test case. This is usually how I write test cases.
Please note here that the indentation tries to match the inside of the loop:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | | [Documentation] | Query DB.
| | | | [Tags] | query | voltagevariation
| | | Duplicates Test
| | | | [Documentation] | Packets should be unique.
| | | | [Tags] | packet_duplicates | system
| | | | Duplicates
| | | Chroma Output ON
| | | | [Documentation] | Setting output terminal status to ON
| | | | [Tags] | set_output_on | voltagevariation
| | | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
Now is this a syntax problem, indentation issue, or is it just plain impossible to do what I'm trying to do? If you have written similar cases, but in a different manner, please let me know!
Any help or input would be highly appreciated!
You are trying to use Keywords as Test Cases. This approach is not supported by Robot Framework.
What you could do is make one Test Case with a lot of Keywords:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | Duplicates
| | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
*** Keywords ***
| Query Database And Analyse Data
| | Do something
| | Do something else
...
You can't really fit [Tags] anywhere useful. You can, however, fire meaningful fail messages (substituting the [Documentation]) if, instead of using a keyword directly, you wrap it in Run Keyword And Return Status.
Furthermore, please have a look at data-driven tests to get rid of the :FOR loop completely.
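For example, a data-driven sketch in the question's pipe-separated style (the test case names and the template keyword are made up for illustration):

```robotframework
*** Settings ***
| Test Template | Run Checks At Voltage

*** Test Cases ***
| Voltage 120 | ${120}
| Voltage 240 | ${240}

*** Keywords ***
| Run Checks At Voltage
| | [Arguments] | ${voltage}
| | Log To Console | Running tests at Voltage: ${voltage}
| | Query Database And Analyse Data
| | Duplicates
```

Each voltage becomes its own test case with its own pass/fail status, and [Documentation] and [Tags] can then live on the individual test cases, which the :FOR-loop version can't offer.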
