My Parse app has a GiftCode collection which disallows the find operation at the class level.
I am writing a beforeSave cloud function that prevents duplicate codes from being entered by our team from Parse's dashboard:
Parse.Cloud.beforeSave('GiftCode', function (req, res) {
Parse.Cloud.useMasterKey();
const code = req.object.get('code');
if (!code) {
res.success();
} else {
const finalCode = code.toUpperCase().trim();
req.object.set('code', finalCode);
(new Parse.Query('GiftCode'))
.equalTo('code', finalCode)
.first()
.then((gift) => {
if (!gift) {
res.success();
} else {
res.error(`GiftCode with code=${finalCode} already exists (objectId=${gift.id})`);
}
}, (err) => {
console.error(err);
res.error(err);
});
}
});
As you can see, I am calling Parse.Cloud.useMasterKey() (and this is running in the Parse cloud), but I am still getting the following error:
This user is not allowed to perform the find operation on GiftCode.
I use useMasterKey() in other normal cloud functions and am able to perform find operations as needed.
Is useMasterKey() not applicable to beforeSave functions?
I've never tried to use the master key in a beforeSave function, but I wouldn't be surprised if there are some extra safeguards in place to prevent it. From a security standpoint, it seems like it could make all write-based CLPs and ACLs worthless for that class.
Try selectively using the master key by passing it as an option to the query, like so:
(new Parse.Query('GiftCode'))
.equalTo('code', finalCode)
.first({ useMasterKey: true })
.then((gift) => {
...
Parse.Cloud.useMasterKey() has been deprecated since Parse Server version 2.3.0 (Dec 7, 2016). From that version on, it is a no-op (it does nothing). You should instead pass the { useMasterKey: true } option to each method that needs to override an ACL or CLP in your code.
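Putting it together, here is a minimal sketch of the handler from the question with the master key passed per query instead of enabled globally (same req/res callback style as the original code):
Parse.Cloud.beforeSave('GiftCode', function (req, res) {
    const code = req.object.get('code');
    if (!code) {
        return res.success();
    }
    const finalCode = code.toUpperCase().trim();
    req.object.set('code', finalCode);
    // Pass the master key to this query only, so the class-level
    // restriction on find is bypassed just for the duplicate check.
    new Parse.Query('GiftCode')
        .equalTo('code', finalCode)
        .first({ useMasterKey: true })
        .then((gift) => {
            if (!gift) {
                res.success();
            } else {
                res.error(`GiftCode with code=${finalCode} already exists (objectId=${gift.id})`);
            }
        }, (err) => {
            console.error(err);
            res.error(err);
        });
});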
I have a MERN stack application with Login and Register working properly.
On opening the dashboard ("/" path), it dispatches "getWallets" 3 times instead of once:
Dashboard.jsx :
useEffect(() => {
if (isError) {
console.log(message)
}
if (!user) {
navigate("/login")
}
else {
dispatch(getWallets())
}
return () => {
dispatch(reset())
}
}, [user, navigate, isError, message, dispatch])
It also dispatches "getWalletData" 9 times instead of once (since I only have 1 wallet at the moment).
TestWalletsData.jsx (component inserted on Dashboard.jsx):
useEffect(() => {
if (isError) {
console.log(message)
}
if (wallets.length > 0 && walletsData.length <= wallets.length) {
wallets.forEach(wallet => {
dispatch(getWalletData(wallet))
dispatch(reset())
})
}
return () => {
dispatch(reset())
}
}, [wallets, wallets.length, walletsData, isError, message, dispatch])
At this point the application runs OK, since I don't allow an object to be pushed into the state if it's already there, but because I'm using a rate-limited API to fetch wallet data, this isn't the road I want to go down.
I'm assuming the issue arises from the re-rendering of components and incorrect use of useEffect, but I just don't know how to fix it.
I've tried setting the environment to production as suggested by this comment, but everything stays the same: https://stackoverflow.com/a/72301433/14399239
PS: I had never worked with React Redux (or Redux, for that matter) before trying it out on this project. I tried to follow a tutorial's logic and apply it to my use case, but I'm having serious difficulties.
EDIT
Managed to solve the issue by removing React.StrictMode!
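For reference, a minimal sketch of what that change looks like, assuming a typical web entry point that uses ReactDOM.createRoot with a Redux Provider (the file names and layout are assumptions, not taken from the question):
import React from 'react'
import ReactDOM from 'react-dom/client'
import { Provider } from 'react-redux'
import { store } from './app/store' // hypothetical path, adjust to your project
import App from './App'

const root = ReactDOM.createRoot(document.getElementById('root'))

// Rendering without the <React.StrictMode> wrapper stops StrictMode's
// intentional double-invocation of effects in development builds.
root.render(
  <Provider store={store}>
    <App />
  </Provider>
)
Note that StrictMode only double-invokes effects in development, so an alternative is to keep it and make the effects idempotent instead.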
I'm trying to unit test a service that uses Elasticsearch. I want to make sure I am using the right techniques.
I am new to many areas of this problem, so most of my attempts have come from reading other similar questions and trying out the approaches that make sense for my use case. I believe I am missing a field within createTestingModule. Also, sometimes I see providers: [Service] and other times components: [Service].
const module: TestingModule = await Test.createTestingModule({
providers: [PoolJobService],
}).compile()
This is the current error I have:
Nest can't resolve dependencies of the PoolJobService (?). Please make sure that the argument at index [0] is available in the _RootTestModule context.
Here is my code:
PoolJobService
import { Injectable } from '@nestjs/common'
import { ElasticSearchService } from '../ElasticSearch/ElasticSearchService'
@Injectable()
export class PoolJobService {
constructor(private readonly esService: ElasticSearchService) {}
async getPoolJobs() {
return this.esService.getElasticSearchData('pool/job')
}
}
PoolJobService.spec.ts
import { Test, TestingModule } from '@nestjs/testing'
import { PoolJobService } from './PoolJobService'
describe('PoolJobService', () => {
let poolJobService: PoolJobService
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [PoolJobService],
}).compile()
poolJobService = module.get<PoolJobService>(PoolJobService)
})
it('should be defined', () => {
expect(poolJobService).toBeDefined()
})
I could also use some insight on this, but haven't been able to properly test this because of the current issue
it('should return all PoolJobs', async () => {
jest
.spyOn(poolJobService, 'getPoolJobs')
.mockImplementation(() => Promise.resolve([]))
expect(await poolJobService.getPoolJobs()).resolves.toEqual([])
})
})
First off, you're correct about using providers. Components is an Angular specific thing that does not exist in Nest. The closest thing we have are controllers.
What you should be doing for a unit test is testing what the return of a single function is without digging deeper into the code base itself. In the example you've provided, you would want to mock out your ElasticSearchService with a Jest mock and assert the return of the PoolJobService method.
Nest provides a very nice way for us to do this with Test.createTestingModule as you've already pointed out. Your solution would look similar to the following:
PoolJobService.spec.ts
import { Test, TestingModule } from '@nestjs/testing'
import { PoolJobService } from './PoolJobService'
import { ElasticSearchService } from '../ElasticSearch/ElasticSearchService'
describe('PoolJobService', () => {
let poolJobService: PoolJobService
let elasticService: ElasticSearchService // this line is optional, but I find it useful when overriding mocking functionality
beforeEach(async () => {
const module: TestingModule = await Test.createTestingModule({
providers: [
PoolJobService,
{
provide: ElasticSearchService,
useValue: {
getElasticSearchData: jest.fn()
}
}
],
}).compile()
poolJobService = module.get<PoolJobService>(PoolJobService)
elasticService = module.get<ElasticSearchService>(ElasticSearchService)
})
it('should be defined', () => {
expect(poolJobService).toBeDefined()
})
it('should give the expected return', async () => {
elasticService.getElasticSearchData = jest.fn().mockReturnValue({data: 'your object here'})
const poolJobs = await poolJobService.getPoolJobs()
expect(poolJobs).toEqual({data: 'your object here'})
})
})
You could achieve the same functionality with jest.spyOn instead of a mock, but how you want to implement the functionality is up to you.
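As a sketch, here is the same assertion written with a spy; it would sit inside the same describe block as the test above, and the placeholder return value and the 'pool/job' argument mirror the earlier examples:
it('should give the expected return (spyOn variant)', async () => {
    // Spy on the mocked provider and control what it resolves to
    const spy = jest
        .spyOn(elasticService, 'getElasticSearchData')
        .mockResolvedValue({ data: 'your object here' })

    await expect(poolJobService.getPoolJobs()).resolves.toEqual({ data: 'your object here' })
    // The spy also lets us assert how the dependency was called
    expect(spy).toHaveBeenCalledWith('pool/job')
})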
As a basic rule, whatever is in your constructor, you will need to mock it, and as long as you mock it, whatever is in the mocked object's constructor can be ignored. Happy testing!
EDIT 6/27/2019
About why we mock ElasticSearchService: a unit test is designed to test a specific segment of code without interacting with code outside of the tested function. In this case, we are testing the function getPoolJobs of the PoolJobService class. This means we don't really need to go all out and connect to a database or external server, as that could make our tests slow, prone to breaking when the server is down, or liable to modify data we don't want to modify. Instead, we mock out the external dependency (ElasticSearchService) to return a value we control (in theory this will look very similar to real data, but for the context of this question I made it a string). Then we test that getPoolJobs returns the value that ElasticSearchService's getElasticSearchData function returns, as that is the functionality of this function.
This seems rather trivial in this case and may not seem useful, but once there is business logic after the external call it becomes clear why we would want to mock. Say we have some data transformation that upper-cases the string before we return from the getPoolJobs method:
export class PoolJobService {
constructor(private readonly elasticSearchService: ElasticSearchService) {}
getPoolJobs(data: any): string {
const returnData = this.elasticSearchService.getElasticSearchData(data);
return returnData.toUpperCase();
}
}
From here, in the test we can tell getElasticSearchData what to return and easily assert that getPoolJobs does its necessary logic (asserting that the string really is upper-cased) without worrying about the logic inside getElasticSearchData or about making any network calls. For a function that does nothing but return another function's output, it does feel a little bit like cheating on your tests, but in reality you aren't: you're following the testing patterns used by most others in the community.
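A test for that hypothetical upper-casing variant might look like the following sketch (the input string and mocked return value are made up for illustration):
it('should upper-case whatever ElasticSearchService returns', () => {
    // Control the dependency's output
    elasticService.getElasticSearchData = jest.fn().mockReturnValue('pool jobs')

    // This variant of getPoolJobs is synchronous and returns the transformed string
    expect(poolJobService.getPoolJobs('pool/job')).toEqual('POOL JOBS')
    expect(elasticService.getElasticSearchData).toHaveBeenCalledWith('pool/job')
})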
When you move on to integration and e2e tests, then you'll want to have your external callouts and make sure that your search query is returning what you expect, but that is outside the scope of unit testing.
I'm using Apollo Client in an Exponent React Native app and have noticed that the graphql options method gets run 11 times. Why is that? Is it an error or a performance problem? Is it normal? Is it running the query 11 times as well?
...
@graphql(getEventGql, {
options: ({route}) => {
console.log('why does this log 11 times', route.params);
return {
variables: {
eventId: route.params.eventId,
}
}
},
})
@graphql(joinEventGql)
@connect((state) => ({ user: state.user }))
export default class EventDetailScreen extends Component {
...
Looking at the sample from the documentation at http://dev.apollodata.com/react/queries.html:
Typically, variables to the query will be configured by the props of the wrapper component; wherever the component is used in your application, the caller would pass arguments. So options can be a function that takes the props of the outer component (ownProps by convention):
// The caller could do something like:
<ProfileWithData avatarSize={300} />
// And our HOC could look like:
const ProfileWithData = graphql(CurrentUserForLayout, {
options: ({ avatarSize }) => ({ variables: { avatarSize } }),
})(Profile);
By default, graphql will attempt to pick up any missing variables from the query from ownProps. So in our example above, we could have used the simpler ProfileWithData = graphql(CurrentUserForLayout)(Profile);. However, if you need to change the name of a variable, or compute the value (or just want to be more explicit about things), the options function is the place to do it.
Should you remove the console.log() calls before deploying a React Native app to the stores? Are there some performance or other issues that exist if the console.log() calls are kept in the code?
Is there a way to remove the logs with some task runner (in a similar fashion to web-related task runners like Grunt or Gulp)? We still want them during our development/debugging/testing phase but not on production.
Well, you can always do something like:
if (!__DEV__) {
console.log = () => {};
}
That way, every console.log becomes a no-op as soon as __DEV__ is not true.
The Babel transpiler can remove console statements for you with the following plugin:
npm i babel-plugin-transform-remove-console --save-dev
Edit .babelrc:
{
"env": {
"production": {
"plugins": ["transform-remove-console"]
}
}
}
And console statements are stripped out of your code.
source: https://hashnode.com/post/remove-consolelog-statements-in-production-in-react-react-native-apps-cj2rx8yj7003s2253er5a9ovw
I believe best practice is to wrap your debug code in statements such as...
if(__DEV__){
console.log();
}
This way, it only runs when you're running within the packager or emulator. More info here...
https://facebook.github.io/react-native/docs/performance#using-consolelog-statements
I know this question has already been answered, but I just wanted to add my own two bits. Returning null instead of {} is marginally faster since we don't need to create and allocate an empty object in memory.
if (!__DEV__)
{
console.log = () => null
}
This is obviously extremely minimal, but you can test it yourself with the comparison below:
// return empty object
console.log = () => {}
console.time()
for (var i=0; i<1000000; i++) console.log()
console.timeEnd()
// returning null
console.log = () => null
console.time()
for (var i=0; i<1000000; i++) console.log()
console.timeEnd()
The difference is more pronounced when tested in other environments.
Honestly, in the real world this will probably have no significant benefit; I just thought I would share.
I tried using babel-plugin-transform-remove-console, but the above solutions didn't work for me.
If someone is also trying to do it with babel-plugin-transform-remove-console, they can use this one.
npm i babel-plugin-transform-remove-console --save-dev
Edit babel.config.js
module.exports = (api) => {
const babelEnv = api.env();
const plugins = [];
if (babelEnv !== 'development') {
plugins.push(['transform-remove-console']);
}
return {
presets: ['module:metro-react-native-babel-preset'],
plugins,
};
};
I have found the following to be a good option, as there is no need to log, even when __DEV__ === true, if you are not also remote debugging.
In fact, I have found certain versions of RN/JavaScriptCore/etc. to come to a near halt when logging (even just strings), which is not the case with Chrome's V8 engine.
// only true if remote debugging
const debuggingIsEnabled = (typeof atob !== 'undefined');
if (!debuggingIsEnabled) {
console.log = () => {};
}
Check whether remote JS debugging is enabled
Using Sentry for tracking exceptions automatically disables console.log in production, but it also uses it for tracking logs from the device, so you can see the latest logs in the Sentry exception details (breadcrumbs).
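If you go that route, the setup is just the usual Sentry initialization; a minimal sketch, assuming the @sentry/react-native package (the DSN below is a placeholder):
import * as Sentry from '@sentry/react-native';

Sentry.init({
  // Placeholder DSN; use the one from your Sentry project settings
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',
});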
I have an AWS Lambda function that is scheduled to run once an hour (as described here http://docs.aws.amazon.com/lambda/latest/dg/getting-started-scheduled-events.html).
The function ftps files from a data provider and copies them to S3.
I have a test environment, and a production environment. For each environment, the ftp address and credentials are different.
How can I configure the lambda function so it can be aware of which environment it's running in, and get the ftp config accordingly?
PS: I am aware of this question: runtime configuration for AWS Lambda function, but it did not help me because I am using a scheduled Lambda via the new scheduled functions feature introduced on Oct 8th, 2015, and I cannot see a way to get configuration into the event.
I found a way. For the test version of the function, I am calling it TEST-CopyFtpFilesToS3, and for the production version I am naming it PRODUCTION-CopyFtpFilesToS3. This allows me to pull out the environment name using a regular expression.
Then I am storing config/test.json and config/production.json in the zip file that I upload as code for the function. This zip file will be extracted into the directory process.env.LAMBDA_TASK_ROOT when the function runs. So I can load that file and get the config I need.
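For illustration, one of the bundled files might look something like this (the keys are hypothetical; the real files hold whatever your FTP client needs):
{
  "ftpHost": "ftp.test.example.com",
  "ftpUser": "test-user",
  "ftpPassword": "test-password",
  "s3Bucket": "my-test-bucket"
}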
Some people don't like storing the config in the code zip file, which is fine: you can just load a file from S3 or use whatever strategy you like (see the sketch after the code below).
Code for reading the file from the zip:
const fs = require('fs');

const readConfiguration = () => {
return new Promise((resolve, reject) => {
let environment = /^(.*?)-.*/.exec(process.env.AWS_LAMBDA_FUNCTION_NAME)[1].toLowerCase();
console.log(`environment is ${environment}`);
fs.readFile(`${process.env.LAMBDA_TASK_ROOT}/config/${environment}.json`, 'utf8', function (err,data) {
if (err) {
reject(err);
} else {
var config = JSON.parse(data);
console.log(`configuration is ${data}`);
resolve(config);
}
});
});
};
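If you prefer the S3 approach mentioned above, a sketch of an equivalent loader might look like this (the bucket name is a placeholder, and it assumes the aws-sdk v2 client bundled with Lambda's Node.js runtime):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const readConfigurationFromS3 = () => {
    // Same environment-from-function-name trick as above
    const environment = /^(.*?)-.*/.exec(process.env.AWS_LAMBDA_FUNCTION_NAME)[1].toLowerCase();
    return s3
        .getObject({
            Bucket: 'my-lambda-config', // placeholder bucket name
            Key: `config/${environment}.json`,
        })
        .promise()
        .then((response) => JSON.parse(response.Body.toString('utf8')));
};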