Get protractor sessionId into a file - jasmine

I need to get the job ID / session ID from a Protractor run into a file so I can create links to screenshots / videos at Sauce Labs. Is there a correct way to do this?
One approach I'm looking at is to get the session ID from the browser object then pass to a custom reporter that writes it to a file:
// protractor.conf.js
// assumes: var q = require('q') at the top of the file
onPrepare: function () {
  var sessionIdP = q.defer();
  browser.getSession().then(function (session) {
    sessionIdP.resolve(session.getId());
  });
  jasmine.getEnv().addReporter(new SessionIdWriter({
    sessionId: sessionIdP
  }));
}
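For reference, the SessionIdWriter I have in mind would be roughly this (sketch only; the reporter name is mine and the output file name is arbitrary):
// SessionIdWriter.js -- sketch of the custom reporter
var fs = require('fs');

function SessionIdWriter(options) {
  this.sessionId = options.sessionId; // the deferred created in onPrepare
}

// Jasmine calls jasmineDone() once all specs have finished
SessionIdWriter.prototype.jasmineDone = function () {
  this.sessionId.promise.then(function (id) {
    fs.writeFileSync('sessionId.txt', id.toString());
  });
};

module.exports = SessionIdWriter;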
This should work, but can it be done more cleanly?
I'm aware that Sauce Labs offers a REST API that can return the latest job ID, but this presents a race condition with other users of the account. Besides, the ID is known locally, so a call shouldn't be needed.

I think what you are looking for are the build: 'some build number' and name: 'my awesome webpage' properties in the capabilities section of your config file. These parameters will be passed through to your Sauce Labs account and show up in the test run table.
More info is available at https://docs.saucelabs.com/reference/test-configuration/#job-annotation
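For example, in protractor.conf.js it would look something like this (sketch; the values are placeholders):
// protractor.conf.js -- annotate the Sauce Labs job
capabilities: {
  browserName: 'chrome',
  name: 'my awesome webpage',   // shows up as the job name
  build: 'some build number'    // groups jobs by build in the Sauce Labs dashboard
},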


How to allow clients to hook into preconfigured hooks in application

I have a pretty standard application with a frontend, a backend and some options in the frontend for modifying data. My backend fires events when data is modified (e.g. record created, record updated, user logged in, etc.).
Now what I want to do is for my customers to be able to code their own functions and "hook" them into these events.
So far the approaches I have thought of are:
Allowing users in the frontend to write some code in a code editor like CodeMirror, but this whole business of storing code and executing it with eval() seems risky and unstable.
My second approach is illustrated below (to the best of my ability at least). The point is that the CRUD API calls a different "hook" web service that has these (recordUpdated, recordCreated, userLoggedIn,...) hook methods exposed. Then the client library needs to extend some predefined interfaces for the different hooks I expose. This still seems doable, but my issue is I can't figure out how my customers would deploy their library into the running "hook" service.
So it's kind of like webhooks, except I already know the exact hooks to be created, which I figured could allow for an easier setup: instead of customers having to create their own web services from scratch, they would just create a library that is then deployed into an existing API (or something like that...). Preferably the infrastructure details should be hidden from the customers so they can focus solely on the business logic inside their custom hooks.
It's kind of hard to explain, but hopefully someone will get it and can tell me if I'm on the right track, or if there is a more standard way of doing hooks like these?
Currently the entire backend is written in C# but that is not a requirement.
I'll just draft out the main framework, then wait for your feedback to fill in anything unclear.
Disclaimer: I don't really have expertise with security and sandboxing. I just know it's an important thing, but really, it's beyond me. You go figure it out 😂
Suppose we're now in a safe sandbox where all malicious behavior is magically taken care of; let's write some Node.js code for that "hook engine".
How users deploy their plugin code.
Let's assume we use file-based deployment. The interface you need to implement is a PluginRegistry.
class PluginRegistry {
  constructor() {
    /**
     * The plugin registry holds records of plugin info:
     *
     * type IPluginInfo = {
     *   userId: string,
     *   hash: string,
     *   filePath: string,
     *   module: null | object,
     * }
     */
    this.records = new Map()
  }
  register(userId, info) {
    this.records.set(userId, info)
  }
  query(userId) {
    return this.records.get(userId)
  }
}
// assumes: const http = require('http'), const fs = require('fs'),
// and pluginDir / port defined elsewhere in the app

// plugin registry should be a singleton in your app.
const pluginRegistrySingleton = new PluginRegistry()

// the app opens an HTTP endpoint that accepts plugin registration;
// this is how you receive user-provided code
// (isPluginRegistration and processRequest are left to you)
const server = http.createServer((req, res) => {
  if (isPluginRegistration(req)) {
    let { userId, fileBytes, hash } = processRequest(req)
    let filePath = pluginDir + '/' + hash + '.js'
    let pluginInfo = {
      userId,
      // you should use some kind of hash to uniquely identify the plugin
      hash,
      filePath,
      // "module" field is left empty; it will be lazy-loaded
      // when the plugin code is actually needed
      module: null,
    }
    let existingPluginInfo = pluginRegistrySingleton.query(userId)
    if (existingPluginInfo && existingPluginInfo.hash === hash) {
      // already exists, skip
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok');
    } else {
      // plugin code gets written down somewhere on disk
      fs.writeFile(filePath, fileBytes, (err) => {
        if (err) {
          res.writeHead(500, { 'Content-Type': 'text/plain' });
          res.end('failed to store plugin');
          return
        }
        pluginRegistrySingleton.register(userId, pluginInfo)
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('ok');
      })
    }
  }
})
server.listen(port)
From the perspective of the hook engine, it simply opens an HTTP endpoint to accept plugin registration, agnostic to the source.
Whether it's from a CI/CD pipeline or a plain web-interface upload, it doesn't matter. If you have CI/CD set up for your users, it is just a dedicated build machine that runs scripts, isn't it? So just fire a call to this endpoint to upload whatever you need. The same applies to the web interface.
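As an illustration, a registration upload from a build script could be as simple as this (a sketch, assuming a recent Node with global fetch; the endpoint path, field names and hashing scheme are assumptions made to match processRequest above):
// upload-plugin.js -- sketch of a CI/CD step registering plugin code
const fs = require('fs')
const crypto = require('crypto')

const fileBytes = fs.readFileSync('./my-plugin.js', 'utf8')
const hash = crypto.createHash('sha256').update(fileBytes).digest('hex')

fetch('https://hooks.example.com/plugins/register', {  // hypothetical URL
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ userId: 'user-123', fileBytes, hash }),
}).then(res => console.log('registration status:', res.status))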
How we would execute plugin code
User plugin code is just a normal Node.js module. You might instruct them to expose a certain API and conform to your protocol (an example plugin follows the engine below).
class HookEngine {
  constructor(pluginRegistry) {
    // dependency injection
    this.pluginRegistry = pluginRegistry
  }
  // hook
  oncreate(payload) {
    // the hook call payload should identify the user
    const pluginInfo = this.pluginRegistry.query(payload.user.id)
    if (!pluginInfo) return // this user has no plugin registered
    // lazy-load the plugin module when needed
    if (!pluginInfo.module) {
      pluginInfo.module = require(pluginInfo.filePath)
    }
    // a user plugin module is just a normal Node.js module:
    // you load it with `require` and call whatever function you need.
    pluginInfo.module.oncreate(payload)
  }
}
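And a user plugin under this protocol might look like this (sketch; the oncreate name simply mirrors the engine above):
// my-plugin.js -- what a customer would upload (sketch)
module.exports = {
  // called by HookEngine.oncreate with the event payload
  oncreate(payload) {
    console.log('record created for user', payload.user.id, payload.record)
    // ...custom business logic here
  },
}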

Ensure that fixtures exist when running a test. Control order of tests running

A lot of this is wrapped in commands, but I've left that part out to keep the problem description simpler.
Consider these two tests:
# Test1: Test login for user
- Step1: Logs in manually (go to login-URL, fill out credentials and click 'Log in').
- Step2: Save auth-cookies as fixtures.
# Test2: Test something on the dashboard for the user.
- Step1: Set auth-cookies (generated in Test1)
- Step2: Visit https://example.org/dashboard and ensure the user can see the dashboard.
If they run in the order listed above, then everything is fine.
But if Test2 runs before Test1, then Test2 will fail, since Test1 hasn't generated the cookies yet.
So Test1 is kind of a prerequisite for Test2.
But Test1 doesn't need to run every time Test2 runs - only if the auth-cookies aren't generated.
I wish I could define my Test2 to be like this instead:
Test2: Test something on the dashboard for the user.
- Step1: Run an ensureAuthCookiesExists command (rough sketch below)
- Step2: If the AuthCookies.json fixture doesn't exist, then run Test1
- Step3: Set auth-cookies (generated in Test1)
- Step4: Visit https://example.org/dashboard and ensure the user can see the dashboard.
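Something like this custom command is what I have in mind (rough sketch; ensureAuthCookiesExists, the fileExists task and the selectors are all made up):
// cypress/support/commands.js -- sketch only
Cypress.Commands.add('ensureAuthCookiesExists', () => {
  // cy.task can check the filesystem from Node; 'fileExists' would be
  // a task I'd have to register in the plugins/config file
  cy.task('fileExists', 'cypress/fixtures/AuthCookies.json').then((exists) => {
    if (!exists) {
      // effectively Test1: log in via the UI and save the cookies
      cy.visit('/login')
      cy.get('#username').type('user')       // placeholder selectors
      cy.get('#password').type('password')
      cy.contains('Log in').click()
      cy.getCookies().then((cookies) => {
        cy.writeFile('cypress/fixtures/AuthCookies.json', cookies)
      })
    }
  })
})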
Solution attempt 1: Control by order
For a long time I've done this using this answer: How to control order of tests. And then having my tests defined like this:
{
  "baseUrl": "http://localhost:5000",
  "testFiles": [
    "preparations/*.js",
    "feature-1/check-header.spec.js",
    "feature-2/check-buttons.spec.js",
    "feature-3/check-images.spec.js",
    "feature-4/check-404-page.spec.js",
    //...
  ]
}
But that is annoying, since it means I have to keep adding new features to that list.
And it only solves the problem if I want to run all the tests. If I want to run preparations.spec.js and thereafter feature-2/check-buttons.spec.js, I can't do that easily.
Solution attempt 2: Naming tests smartly
I also tried simply naming them appropriately, as explained here: naming your tests in Cypress.
But that pollutes the naming of the tests, making them more cluttered. And it faces the same issue as solution attempt 1 (I can't easily run two specific tests after one another).
Solution attempt 3: Making a command for it
I considered making a command that tests for it. Here is some pseudo-code:
beforeEach(() => {
  if (preparationsHasntCompleted()) {
    runPreparations();
  }
});
This seems smart, but it would add extra runtime to all my tests.
This may not suit your testing aims, but the new cy.session() can ensure the cookie is set regardless of test processing order.
Use it in the support file in a beforeEach() so it runs before every test.
The first test that runs (either test1 or test2) will perform the request; subsequent tests will use cached values (not repeating the request).
// cypress/support/e2e.js -- v10 support file
beforeEach(() => {
  cy.session('init', () => {
    // request and set cookies
  })
})

// cypress/e2e/test1.spec.cy.js
it('first test', () => {
  // the beforeEach() for the 1st test will fire the request
  ...
})

// cypress/e2e/test2.spec.cy.js
it('second test', () => {
  // the beforeEach() for the 2nd test will set the same values as before from cache
  // and not resend the request
})
Upside:
performing login once per run (ref runtime concerns)
performing tests in any order
using the same token for all tests in session (if that's important)
Downside:
if obtaining auth cookies manually (via UI), this effectively moves the login test into a beforeEach()
Example logging in via request
Rather than obtaining the auth cookie via UI, it may be possible to get it via cy.request().
Example from the docs,
cy.session([username, password], () => {
  cy.request({
    method: 'POST',
    url: '/login',
    body: { username, password },
  }).then(({ body }) => {
    cy.setCookie('authToken', body.token)
  })
})
It is generally not recommended to write tests that depend on each other, as stated in the best practices. You can never be sure they run correctly. In your case, if the first test fails, all the other ones will fail too, even if the component/functionality is working properly, especially if you test more than just the pure login procedure, e.g. the design.
As @Fody said before, you can ensure you are logged in in the beforeEach() hook.
I would do it like this:
Use one describe to test the login via UI.
Use another describe where you put the login (via REST) in the before() and the command Cypress.Cookies.preserveOnce('<nameOfTheCookie>'); in the beforeEach(), so the cookie isn't cleared before the following it()s.
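A rough sketch of that second describe (the cookie name, endpoint and assertions are placeholders):
// sketch only -- adapt names to your app
describe('dashboard', () => {
  before(() => {
    // log in once via REST and set the auth cookie
    cy.request('POST', '/login', { username: 'user', password: 'password' })
      .then(({ body }) => cy.setCookie('authCookie', body.token))
  })

  beforeEach(() => {
    // keep the cookie from being cleared between tests
    Cypress.Cookies.preserveOnce('authCookie')
  })

  it('shows the dashboard', () => {
    cy.visit('https://example.org/dashboard')
    // assertions...
  })
})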

How to integrate Google Picker with new Google Identity Services JavaScript library

Because of the known issue described here (https://developers.google.com/identity/sign-in/web/troubleshooting), I want to update my application to use the new GSI sign-in, which uses fewer cookies than the previous versions and therefore might solve the mentioned error...
My problem is that there's little to no documentation on how to integrate Google Picker with the new GSI.
I used to use gapi for some picker-related code, even for loading the library: gapi.load('picker', () => {}). The migration doc says to replace apis.google.com/js/api.js with the new GSI URL, and a lot of other methods such as googleAuth.signIn or gapi.client.init are to be deprecated by 2023. But then:
How do I load the picker without gapi available? Or does gapi still need to be imported, just without any sign-in related methods?
How will I pass the apiKey and scopes to be able to init the Google Picker?
For methods such as GoogleAuth.isSignedIn the docs simply state "Remove. A user's current sign-in status on Google is unavailable. Users must be signed-in to Google for consent and sign-in moments." What does that even mean? I need to check if the user is signed in in order to not show the popup again every time they want to upload a file from the picker...
Before, we used to get an access_token in the callback of reloadAuthResponse or signIn; now how do we get the token?
Sorry for the question being confusing, I'm very confused with everything. Any input helps, thanks!
I came across https://developers.google.com/identity/oauth2/web/guides/use-token-model through: How to use scoped APIs with (GSI) Google Identity Services
I changed our code to load this script: https://accounts.google.com/gsi/client, and then modified our "authorize" function (see below) to use window.google.accounts.oauth2.initTokenClient instead of window.gapi.auth2.authorize to get the access_token.
Note that the callback has moved from the second argument of the window.gapi.auth2.authorize function to the callback property of the first argument of the window.google.accounts.oauth2.initTokenClient function.
After calling tokenClient.requestAccessToken() (see below), the callback (the res we previously passed to window.gapi.auth2.authorize) is called with an object containing access_token.
  const authorize = () =>
-   new Promise(res => window.gapi.auth2.authorize({
-     client_id: GOOGLE_CLIENT_ID,
-     scope: GOOGLE_DRIVE_SCOPE
-   }, res));
+   new Promise(res => {
+     const tokenClient = window.google.accounts.oauth2.initTokenClient({
+       client_id: GOOGLE_CLIENT_ID,
+       scope: GOOGLE_DRIVE_SCOPE,
+       callback: res,
+     });
+     tokenClient.requestAccessToken();
+   });
The way access_token is used was not changed:
new window.google.picker.PickerBuilder().setOAuthToken(access_token)
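For context, the rest of our picker code stayed roughly like this (a sketch; the developer key constant, the chosen view and the callback are placeholders):
// sketch of building the picker once the token is available
const showPicker = (access_token) => {
  const picker = new window.google.picker.PickerBuilder()
    .setOAuthToken(access_token)
    .setDeveloperKey(GOOGLE_API_KEY)             // placeholder constant
    .addView(window.google.picker.ViewId.DOCS)   // pick whichever view you need
    .setCallback((data) => {
      if (data.action === window.google.picker.Action.PICKED) {
        console.log('picked:', data.docs)
      }
    })
    .build();
  picker.setVisible(true);
};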
@piannone is correct, adding to their answer:
You'll still need to load the 'client' code, as you're using authentication. That means you'll still include https://apis.google.com/js/api.js in your list of scripts. Just don't load 'auth2'. So, while you won't do:
gapi.load('auth2', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);
you will need to:
gapi.load('client', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);
(this is instead of directly loading https://accounts.google.com/gsi/client.js I guess.)

Zapier CLI Trigger - How to use defined sample data when no results returned during setup

I am trying to prototype a trigger using the Zapier CLI and I am running into an issue with the 'Pull In Samples' section when setting up the trigger in the UI.
This tries to pull in a live sample of data to use, however the documentation states that if no results are returned it will use the sample data that is configured for the trigger.
In most cases there will be no live data, so I would ideally prefer the sample data to be used in the first instance; however, my trigger never seems to use the sample and I have not been able to find a concrete example of a 'no results' response.
The API I am using returns XML, so I am manipulating the result into JSON, which works fine if there is data.
If there are no results, so far I have tried returning [], but that just hangs, and if I check the Zapier HTTP logs it's looping HTTP requests until I cancel the sample check.
Returning '[{}]' returns an error that I need an 'id' field.
The definition I am using is:
module.exports = {
  key: 'getsmsinbound',
  noun: 'GetSMSInbound',
  display: {
    label: 'Get Inbound SMS',
    description: 'Check for inbound SMS'
  },
  operation: {
    inputFields: [
      { key: 'number', required: true, type: 'string', helpText: 'Enter the inbound number' },
      { key: 'keyword', required: false, type: 'string', helpText: 'Optional if you have configured a keyword and you wish to check for specific keyword messages.' },
    ],
    perform: getsmsinbound,
    sample: {
      id: 1,
      originator: '+447980123456',
      destination: '+447781484146',
      keyword: '',
      date: '2009-07-08',
      time: '10:38:55',
      body: 'hello world',
      network: 'Orange'
    }
  }
};
I'm hoping it's something obvious, as after scouring the web and the Zapier documentation I've not had any luck!
Sample data must be provided from your app and the sample payload is not used for this poll specifically. From the docs:
Sample results will NOT be used for a user's Zap testing step. That step requires data to be received by an event or returned from a polling URL. If a user chooses to "Skip Test", then the sample result, if provided, will be used.
Personally, I have never seen "Skip Test" show up. A while back I asked support about this:
That's a great question! It's definitely one of those "chicken and egg" situations when using REST Hooks - if there isn't a sample available, then everything just stalls.
When the Zap editor tries to obtain a "sample result", there are three places where it's going to look:
1. The Polling endpoint (in Step #3 of your trigger's setup) is invoked for the current user. If that returns "nothing", then the Zap editor will try the next step.
2. The "most recent record/data" in the Zap's history. Since this is a brand new Zap, there won't be anything present.
3. The Sample result (in Step #4 of your trigger's setup). The Zap editor will tell the user that there's "nothing to show", and will give the user the option to "skip test and continue", which will use the sample JSON that you've provided here.
In reality, it will just continue to retry the request over and over and never provide the user with a "skip test and continue" option. I just emailed again asking if anything has changed since then, but it looks like existing sample data is a requirement.
Perhaps create a record in your API by default, hide it from normal use, and just send back that one?
Or send back dummy data even though Zapier says not to. I'm not sure, but I don't know how people can set up a Zap when no data has been created yet (Zapier says not many of their apps have this issue, but nearly every trigger I've created and every use case for other applications would hint to me otherwise).
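A sketch of what that fallback could look like inside the perform function (the endpoint, the XML-to-JSON helper and the dummy record are placeholders, and returning dummy data goes against Zapier's guidance as noted above):
// sketch of a perform that falls back to a single placeholder record
// when the API has no live data
const getsmsinbound = async (z, bundle) => {
  const response = await z.request({
    url: 'https://api.example.com/sms/inbound',   // placeholder endpoint
    params: { number: bundle.inputData.number, keyword: bundle.inputData.keyword },
  });
  const results = parseXmlToJson(response.content); // your existing XML -> JSON step
  if (results.length > 0) {
    return results;
  }
  // no live data: return one hidden/dummy record so the editor's sample check succeeds
  return [{
    id: 0,
    originator: '',
    destination: bundle.inputData.number,
    keyword: '',
    date: '1970-01-01',
    time: '00:00:00',
    body: '(placeholder - no messages yet)',
    network: '',
  }];
};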

Suppress REST Logging Calls from Supertest

I have started using the MEAN stack and am currently writing REST unit tests using SuperTest.
I want a little more clarity in my log file so that I can easily see my successful and failed tests.
I wish to suppress the console output for the actual REST API calls, which I think is coming from SuperTest.
The logs I want to suppress are the per-request lines printed for each API call.
I think it's actually coming from expressjs/morgan. I've gotten around it by setting the env to test and disabling morgan for the test env.
In my test files:
process.env.NODE_ENV = 'test';
In app.js:
if(app.get('env') !== 'test') app.use(logger('dev'));
You can set up morgan to accept a skip function.
Then you can, say, toggle an env variable on/off, or define skipping logic of your own to temporarily mute the logging.
app.use(
  logger('dev', {
    skip: function (req, res) {
      return process.env.MUTE_LOGGER === 'on';
    },
  }),
);
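Then, mirroring the NODE_ENV approach above, the test files could flip that variable before requiring the app (MUTE_LOGGER is just the name used in the snippet):
// at the top of a test file, before requiring app.js
process.env.MUTE_LOGGER = 'on';
const app = require('../app');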
