Accessing fixture data inside an `it` test case - Cypress

How can I access the following fixtures inside an it block:
users.json
{
"users": [
{
"first_name": "Fadi",
"last_name": "Salam",
"work_email": "fadi.salam#bayzat.com"
},
{
"first_name": "Maha",
"last_name": "Black",
"work_email": "maha.black#bazyat.com"
}
]
}
My cypress related function code:
describe('test', () => {
  beforeEach(function() {
    cy.restoreToken()
    cy.fixture('users.json').as('users')
  })
  it('Add an Employee', function() {
    cy.get('@users').then((users) => {
      const user_1 = users[0]
      cy.add_employee(user_1.first_name, user_1.last_name, user_1.work_email)
    })
  })
})
I am unable to access first_name, ... etc
How can I do it?

I've fixed a few typos in your code and made it work.
In your users.json file 'first_name' is nested under 'users'.
You can use
users.users[0].first_name to access first_name. Sample code below,
cy.get('@users').then((users) => {
console.log(users.users[0].first_name);
})
The console would print 'Fadi' in your case.

If you want to access this fixture in all contexts and it blocks, you can do the following:
let USERS;
before(() => {
  cy.fixture('users.json').then(($users) => USERS = $users);
})
Then you can access the variable:
cy.getUserName().should('have.text', USERS.users[0].first_name);
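For completeness, here is a minimal sketch of the alias/test-context variant applied to the code from the question (it assumes the custom commands cy.restoreToken and cy.add_employee from the question exist):
describe('test', () => {
  beforeEach(function () {
    cy.restoreToken()
    cy.fixture('users.json').as('users')
  })

  it('Add an Employee', function () {
    // the alias is attached to the test context, so this.users works here,
    // as long as function () {} callbacks are used instead of arrow functions
    const user_1 = this.users.users[0]
    cy.add_employee(user_1.first_name, user_1.last_name, user_1.work_email)
  })
})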

Related

Cypress how to stored input data and parse for verification

I have a table form on a webpage. The user fills it in and hits upload, then a result form is displayed.
Table a input abc
Table b input 123
I want to verify that the data is displayed on the result page.
How do I store the values abc and 123 for verification? To avoid linear coding, I do not want to rewrite those values again, and if the test data changes, I only want to change it in one place.
To test the tables/forms, iterate the expected results.
it('tests form', () => {
const expected = [
[ 'a', 'abc' ],
[ 'b', '123' ],
]
expected.forEach(exp => {
cy.get(`table#${exp[0]}`) // construct selector
.find('input')
.invoke('val')
.should('eq', exp[1]) // verify
})
})
You can create a json file and save it under fixtures like data.json:
{
  "tableA": {
    "name": "table a",
    "selector": "input",
    "value": "abc",
    "checkbox": "be.checked"
  },
  "tableB": {
    "name": "table b",
    "selector": "input",
    "value": 123,
    "checkbox": "not.be.checked"
  }
}
And in your tests you can write:
describe('Some page', () => {
beforeEach(function() {
// "this" points at the test context object
cy.fixture('data.json').then((data) => {
this.data = data
})
})
it('has user', function() {
expect(this.data.tableA.name).to.equal('table a')
expect(this.data.tableA.selector).to.equal('input')
expect(this.data.tableA.value).to.equal('abc')
cy.get('selector').should(this.data.tableA.checkbox)
})
})
Important Note:
If you store and access the fixture data using this test context
object, make sure to use function () { ... } callbacks. Otherwise the
test engine will NOT have this pointing at the test context.
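To tie this back to storing values for verification, here is a rough sketch that fills the form from the fixture and asserts the result page against the same data (the selectors and button text are placeholders, not from the original page):
it('fills the form and verifies the result page', function () {
  // placeholder selectors; the fixture is the single source of truth for the values
  cy.get('#table-a input').type(this.data.tableA.value)
  cy.get('#table-b input').type(String(this.data.tableB.value))
  cy.contains('Upload').click()

  // verify the result page shows the same fixture values
  cy.contains(this.data.tableA.value).should('be.visible')
  cy.contains(String(this.data.tableB.value)).should('be.visible')
})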

cypress XHR API call validation in test cases

I am exploring whether I can use Cypress for end-to-end testing in Angular. I am a super beginner at it.
Cypress has a server() instance for XHR.
Question:
Suppose I am testing the login page, so I can write test cases for querying elements and doing the validation. In this process the browser will be making some API calls. Will it be possible to write test cases for validating the status code the API returned, the XHR response, etc.?
Of course. With Cypress you can spy on requests or mock them.
I have written a quick example to show you both methods:
describe("test", () => {
it("spy", () => {
cy.server();
cy.route("POST", /.*queries.*/).as("request")
cy.visit("https://docs.cypress.io/")
.get("#search-input").type("1234567890")
.wait("#request").then(xhr => {
expect(xhr.status).to.eq(200)
})
})
it("mock", () => {
cy.server();
const obj = JSON.parse(`
{
"results": [{
"hits": [{
"hierarchy": {
"lvl2": null,
"lvl3": null,
"lvl0": "Podcasts",
"lvl1": null,
"lvl6": null,
"lvl4": null,
"lvl5": null
},
"url": "https://stackoverflow.com",
"content": "mocked",
"anchor": "sidebar",
"objectID": "238538711",
"_snippetResult": {
"content": {
"value": "mocked",
"matchLevel": "full"
}
},
"_highlightResult": {
"hierarchy": {
"lvl0": {
"value": "Podcasts",
"matchLevel": "none",
"matchedWords": []
}
},
"content": {
"value": "mocked",
"matchLevel": "full",
"fullyHighlighted": false,
"matchedWords": ["testt"]
}
}
}
]
}
]
}
`);
cy.route("POST", /.*queries.*/, obj)
cy.visit("https://docs.cypress.io/")
.get("#search-input").type("1234567890")
.get("#algolia-autocomplete-listbox-0").should("contain", "mocked")
})
})
The spy example receives the raw XHR object, and thus you are able to check the status code and so on.
The mock example shows how you can mock any AJAX request.
Please note: currently you cannot spy on or mock fetch requests. But as far as I know they are rewriting the network layer in order to make this possible. Let me know if you need further assistance.
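Not part of the original answer: in newer Cypress versions, cy.intercept() replaces cy.server()/cy.route() and can also spy on and stub fetch requests. A minimal sketch of the spy case against the same docs page:
it("spy with intercept", () => {
  cy.intercept("POST", /.*queries.*/).as("request")
  cy.visit("https://docs.cypress.io/")
  cy.get("#search-input").type("1234567890")
  // with cy.intercept, the status code lives on the response object
  cy.wait("@request").its("response.statusCode").should("eq", 200)
})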

Create GraphQL connection between User and Role Type via Role and query them

I want to create a GraphQL connection between User <> Role <> Role_Type and finally get the User Role Type back with a query. Here I break it down to only the important lines of code:
type Query {
getUsers: [User]
}
type User {
_id: ID
firstname: String
roles: [Role]
}
type Role {
_id: ID
role_type_id: ID
role_types: [Role_Type]
user_id: ID
}
type Role_Type {
_id: ID
name: String
}
and in the User resolver I have:
Query: {
getUser: async (root, { _id }) => {
return prepare(await DBUsers.findOne(ObjectId(_id)))
}
},
User: {
roles: async ({_id}) => {
return (await MongoDBRoles.find({user_id: _id}).toArray()).map(prepare)
}
}
and for the Role resolver:
Role: {
role_types: async ({_id}) => {
return (await MongoDBRoleTypes.find({role_type_id: _id}).toArray()).map(prepare)
},
},
When I query now with:
{
getUser(_id: "5d555adcd2c22a242863f7a1") {
firstname
roles {
_id
role_type_id
user_id
role_types {
name
}
}
}
}
I get:
{
"data": {
"getUser": {
"firstname": "Gregor",
"roles": [
{
"_id": "5d90cf352f50882ab0ce3877",
"role_type_id": "5d90ce48b7893d19bcc328f9",
"user_id": "5d555adcd2c22a242863f7a1",
"role_types": []
}
]
}
}
}
But why is role_types empty? As you can see, role_type_id is filled, so why is there no connection?
When I look into MongoDB, I can see the Role Type of the user.
If you need more of the schema/resolvers, let me know.
OK, it was kind of easy to fix. The Role resolver was doing a multi-document find, but it needs to be a single lookup, because it resolves one Role_Type by its role_type_id. Here is what I have changed, and now it works properly.
Replace this:
role_types: async ({_id}) => {
return (await MongoDBRoleTypes.find({role_type_id: _id}).toArray()).map(prepare)
},
with:
role_type: async ({role_type_id}) => {
return prepare(await DBRoleTypes.findOne(ObjectId(role_type_id)))
},
and that fixes it. (Note that the Role type in the schema then needs a role_type: Role_Type field in place of role_types: [Role_Type], and the query has to request role_type instead.)

Parallel promise execution in resolve functions

I have a question about handling promises in resolve functions for a GraphQL client. Traditionally, resolvers would be implemented on the server, but I am wrapping a REST API on the client.
Background and Motivation
Given resolvers like:
const resolvers = {
Query: {
posts: (obj, args, context) => {
return fetch('/posts').then(res => res.json());
}
},
Post: {
author: (obj, args, _, context) => {
return fetch(`/users/${obj.userId}`)
  .then(res => res.json())
  .then(data => cache.users[data.id] = data)
}
}
};
If I run the query:
posts {
author {
firstName
}
}
and the /posts API behind Query.posts() returns four post objects:
[
{
"id": 1,
"body": "It's a nice prototyping tool",
"user_id": 1
},
{
"id": 2,
"body": "I wonder if he used logo?",
"user_id": 2
},
{
"id": 3,
"body": "Is it even worth arguing?",
"user_id": 1
},
{
"id": 4,
"body": "Is there a form above all forms? I think so.",
"user_id": 1
}
]
the Post.author() resolver will get called four times to resolve the author field.
graphql-js has a very nice feature where each of the promises returned from the Post.author() resolver will execute in parallel.
I've further been able to eliminate re-fetching authors with the same userId using Facebook's dataloader library. BUT, I'd like to use a custom cache instead of dataloader.
The Question
Is there a way to prevent the Post.author() resolver from executing in parallel? Inside the Post.author() resolver, I would like to fetch authors one at a time, checking my cache in between to prevent duplicate http requests.
But, right now the promises returned from Post.author() are queued and executed at once, so I cannot check the cache before each request.
Thank you for any tips!
I definitely recommend looking at DataLoader as it's designed to solve exactly this problem. If you don't use it directly, at least you can read its implementation (which is not that many lines) and borrow the techniques atop your custom cache.
GraphQL and the graphql.js libraries themselves are not concerned with loading data - they leave that up to you via resolver functions. Graphql.js is just calling these resolver functions as eagerly as it can to provide for the fastest overall execution of your query. You can absolutely decide to return Promises which resolve sequentially (which I wouldn't recommend), or—as DataLoader implements—deduplicate with memoization (which is what you want for solving this).
For example:
const resolvers = {
Post: {
author: (obj, args, _, context) => {
return fetchAuthor(obj.userId)
}
}
};
// Very simple memoization
var authorPromises = {};
function fetchAuthor(id) {
var author = authorPromises[id];
if (!author) {
author = fetch(`/users/${id}`)
  .then(res => res.json())
  .then(data => cache.users[data.id] = data);
authorPromises[id] = author;
}
return author;
}
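A quick illustration, not from the original answer: even when graphql-js fires the resolver calls in parallel, the memoized promise map means only one request per unique id.
// four parallel lookups, but only two HTTP requests (for ids 1 and 2)
Promise.all([fetchAuthor(1), fetchAuthor(2), fetchAuthor(1), fetchAuthor(1)])
  .then(authors => console.log(authors.length)) // 4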
Just for people who use a DataSource for REST API calls along with DataLoader (in this case DataLoader doesn't really help, as it's a single request), here is a simple caching solution/example.
// missing imports added for completeness (assuming apollo-datasource-rest, dataloader and memory-cache)
import { RESTDataSource } from 'apollo-datasource-rest'
import DataLoader from 'dataloader'
import cache from 'memory-cache'

export class RetrievePostAPI extends RESTDataSource {
constructor() {
super()
this.baseURL = 'http://localhost:3000/'
}
postLoader = new DataLoader(async ids => {
return await Promise.all(
ids.map(async id => {
if (cache.keys().includes(id)) {
return cache.get(id)
} else {
const postPromise = new Promise((resolve, reject) => {
resolve(this.get(`posts/${id}`))
reject('Post Promise Error!')
})
cache.put(id, postPromise, 1000 * 60)
return postPromise
}
})
)
})
async getPost(id) {
return this.postLoader.load(id)
}
}
Note: here I use memory-cache as the caching mechanism.
Hope this helps.
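A rough sketch, not from the original answer, of how this data source might be wired into Apollo Server 2/3 and used from a resolver; the typeDefs, the Query field, and the './RetrievePostAPI' path are assumptions:
import { ApolloServer, gql } from 'apollo-server'
import { RetrievePostAPI } from './RetrievePostAPI' // assumed path

const typeDefs = gql`
  type Post { id: ID, title: String }
  type Query { post(id: ID!): Post }
`

const resolvers = {
  Query: {
    // the data source instance is exposed on context.dataSources
    post: (_, { id }, { dataSources }) => dataSources.postAPI.getPost(id),
  },
}

const server = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => ({ postAPI: new RetrievePostAPI() }),
})

server.listen()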

How do I return only selected certain fields in Strapi?

Pretty straightforward (I hope). I'd like to be able to use the API endpoint and have it return only specified fields, i.e. something like this:
http://localhost:1337/api/reference?select=["name"]
Would ideally return something of the form
[{"name": "Ref1"}]
Unfortunately that is not the case, and in actuality it returns the following.
[
{
"contributors": [
{
"username": "aduensing",
"email": "standin#gmail.com",
"lang": "en_US",
"template": "default",
"id_ref": "1",
"provider": "local",
"id": 1,
"createdAt": "2016-07-28T19:39:09.349Z",
"updatedAt": "2016-07-28T19:39:09.360Z"
}
],
"createdBy": {
"username": "aduensing",
"email": "standin#gmail.com",
"lang": "en_US",
"template": "default",
"id_ref": "1",
"provider": "local",
"id": 1,
"createdAt": "2016-07-28T19:39:09.349Z",
"updatedAt": "2016-07-28T19:39:09.360Z"
},
"updatedBy": {
"username": "aduensing",
"email": "standin#gmail.com",
"lang": "en_US",
"template": "default",
"id_ref": "1",
"provider": "local",
"id": 1,
"createdAt": "2016-07-28T19:39:09.349Z",
"updatedAt": "2016-07-28T19:39:09.360Z"
},
"question": {
"createdBy": 1,
"createdAt": "2016-07-28T19:41:33.152Z",
"template": "default",
"lang": "en_US",
"name": "My Question",
"content": "Cool stuff, huh?",
"updatedBy": 1,
"updatedAt": "2016-07-28T19:45:02.893Z",
"id": "579a5ff83af4445c179bd8a9"
},
"createdAt": "2016-07-28T19:44:31.516Z",
"template": "default",
"lang": "en_US",
"name": "Ref1",
"link": "Google",
"priority": 1,
"updatedAt": "2016-07-28T19:45:02.952Z",
"id": "579a60ab5c8592c01f946cb5"
}
]
This immediately becomes problematic in any real-world context: if I decide to load 10, 20, 30, or more records at once, I end up loading 50 times the data I need. More bandwidth is used, load times are slower, etc.
How I solved this:
Create a custom controller action (for example, findPaths)
in contributor/controllers/contributor.js
module.exports = {
findPaths: async ctx => {
const result = await strapi
.query('contributor')
.model.fetchAll({ columns: ['slug'] }) // fetch only the 'slug' column
ctx.send(result);
}
}
Add custom route (for example 'paths')
in contributor/config/routes.json
{
"method": "GET",
"path": "/contributors/paths",
"handler": "contributor.findPaths",
"config": {
"policies": []
}
},
Add permission in the admin panel for the Contributor entity, findPaths action
That's it. Now it returns only the slug field for all contributor records.
http://your-host:1337/contributors/paths
Here is how you can return specific fields and also exclude the relations to optimize the response.
async list (ctx) {
const result = await strapi.query('article').model.query(qb => {
qb.select('id', 'title', 'link', 'content');
}).fetchAll({
withRelated: []
}).catch(e => {
console.error(e)
});
if(result) {
ctx.send(result);
} else {
ctx.send({"statusCode": 404, "error": "Not Found", "message": "Not Found"});
}
}
I know this is an old thread, but I just ran into exactly the same problem and could not find any solution, neither in the docs nor anywhere else.
After a few minutes of console logging and playing with the service, I was able to filter my fields using the following piece of code:
const q = Post
.find()
.sort(filters.sort)
.skip(filters.start)
.limit(filters.limit)
.populate(populate);
return filterFields(q, ['title', 'content']);
where filterFields is the following function:
function filterFields(q, fields) {
q._fields = fields;
return q;
}
It is a kinda dirty solution, and I haven't figured out how to apply this to included relation entities yet, but I hope it helps somebody looking for a solution to this problem.
I'm not sure why Strapi does not support this, since it is clearly capable of filtering the fields when they are explicitly set. It would be nice to use it like this:
return Post
.find()
.fields(['title', 'content'])
.sort(filters.sort)
.skip(filters.start)
.limit(filters.limit)
.populate(populate);
It would be better to have the query select the fields rather than relying on Node to remove content. However, I have found this useful in some situations and thought I would share it. The Strapi sanitizeEntity function can take extra options, one of which lets you include only the fields you need. It is similar to manually deleting the fields, but a more reusable way to do so.
const { sanitizeEntity } = require('strapi-utils');
let entities = await strapi.query('posts').find({ parent: parent.id })
return entities.map(entity => {
return sanitizeEntity(entity, {
model: strapi.models['posts'],
includeFields: ['id', 'name', 'title', 'type', 'parent', 'userType']
});
});
This feature is not implemented in Strapi yet. To compensate, the best option for you is probably to use GraphQL (http://strapi.io/documentation/graphql).
Feel free to create an issue or to submit a pull request: https://github.com/wistityhq/strapi
You can use the select function if you are using a MongoDB database:
await strapi.query('game-category').model.find().select(["Code"])
As you can see, I have a model called game-category and I only need the "Code" field, so I used the select function.
In the current Strapi version (3.x, not sure about previous ones), this can be achieved using the select method in custom queries, regardless of which ORM is being used.
SQL example:
const restaurant = await strapi
.query('restaurant')
.model.query((qb) => {
qb.where('id', 1);
qb.select('name');
})
.fetch();
Not very beautiful, but you can delete fields before returning.
Reference:
https://strapi.io/documentation/developer-docs/latest/guides/custom-data-response.html#apply-our-changes
const { sanitizeEntity } = require('strapi-utils');
module.exports = {
async find(ctx) {
let entities;
if (ctx.query._q) {
entities = await strapi.services.restaurant.search(ctx.query);
} else {
entities = await strapi.services.restaurant.find(ctx.query);
}
return entities.map(entity => {
const restaurant = sanitizeEntity(entity, {
model: strapi.models.restaurant,
});
if (restaurant.chef && restaurant.chef.email) {
delete restaurant.chef.email;
}
return restaurant;
});
},
};
Yeah, I remember another way.
You can use attributes in the model's xx.settings.json file.
Reference:
model-options
{
  "options": {
    "timestamps": true,
    "privateAttributes": ["id", "created_at"],  <- the fields you don't want returned
    "populateCreatorFields": true  <- the system creator fields; set to false to not return them
  }
}
You can override the default Strapi entity response of:
entity = await strapi.services.weeklyplans.create(add_plan);
return sanitizeEntity(entity, { model: strapi.models.weeklyplans });
By using:
ctx.response.body = {
status: "your API status",
message: "Your own message"
}
Using the ctx object, we can choose which fields to include in the response object, and there is no need to return anything. Place the ctx.response.body assignment wherever the response has to be sent, once the condition is fulfilled.
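A rough sketch, not from the original answer, of what that could look like for the weeklyplans example above; the entity field names are assumed:
async create(ctx) {
  const entity = await strapi.services.weeklyplans.create(ctx.request.body);
  // hand-pick which fields go into the response instead of returning the sanitized entity
  ctx.response.body = {
    status: "created",
    id: entity.id,
    name: entity.name, // assumed field
  };
},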
It is now 2023, and for a little while it has been possible to do this using the fields parameter:
http://localhost:1337/api/reference?fields[0]=name&fields[1]=something
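For completeness, a small consumer-side sketch; it assumes the Strapi v4 response shape of { data: [{ id, attributes }] }:
async function fetchNames() {
  const res = await fetch('http://localhost:1337/api/reference?fields[0]=name');
  const { data } = await res.json();
  // each entry carries only the requested attribute
  return data.map(item => item.attributes.name);
}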
