Parse-Server: How can I create a log file on the outside? - heroku

I use Parse-Server on Heroku.
I can see the server log via the Parse Dashboard. However, it disappears every time I deploy.
Is it possible to write logs to external storage?

You can access your logs like this: heroku logs -n 1500
If you want more than 1500 log lines, you should use an add-on or Log Drains.
You can then redirect your logs to a file or to external storage, for example:
heroku logs -n 1500 --app application_name >> file_logs.txt
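If you want logs streamed continuously to external storage rather than fetched in snapshots, you can also attach a log drain (a hedged example; the endpoint URL is a placeholder for your own log collector):
heroku drains:add https://logs.example.com/ingest --app application_name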
There is also the FileLoggerAdapter from Parse Server, which you can use like this:
fileLogger.info('info content', () => { /* ... */ });
fileLogger.error('error content', () => { /* ... */ });
fileLogger.query({
  level: 'error',
  size: 10,
  from: Date.now() - (30 * 24 * 60 * 60 * 1000),
  until: Date.now(),
  order: 'desc'
}, (results) => { /* ... */ });
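If you want parse-server itself to write log files to a location you control, it also accepts a logsFolder option. A minimal sketch, assuming placeholder credentials; note that Heroku's filesystem is ephemeral, so the folder needs to point at mounted external storage to survive a deploy:
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  appId: 'myAppId',         // placeholder
  masterKey: 'myMasterKey', // placeholder
  serverURL: 'http://localhost:1337/parse',
  logsFolder: './my-logs',  // placeholder path; use external storage on Heroku
});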

Related

Run specific part of Cypress test multiple times (not whole test)

Is it possible to run a specific part of a test in Cypress over and over again without executing the whole test case? I get an error in the second part of the test case, and the first half takes 100s. That means I have to wait 100s every time just to reach the point where the error occurs. I would like to rerun the test case from a few steps before the error occurs. So once again, my question is: is this possible in Cypress? Thanks
Workaround #1
If you are using cucumber in cypress you can modify your scenario into a Scenario Outline that will execute N times, with a scenario tag:
@runMe
Scenario Outline: Visit Google Page
  Given that google page is displayed
  Examples:
    | nthRun |
    | 1      |
    | 2      |
    | 3      |
    | 4      |
    | 100    |
After that, run the test in the terminal through tags:
./node_modules/.bin/cypress-tags run -e TAGS='@runMe'
Reference: https://www.npmjs.com/package/cypress-cucumber-preprocessor?activeTab=versions#running-tagged-tests
Workaround #2
Cypress does have retry capability, but it only retries a scenario on failure. You can force your scenario to fail so that it retries N times, again with a scenario tag.
In your cypress.json add the following configuration:
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 99,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 99
  }
}
Reference: https://docs.cypress.io/guides/guides/test-retries#How-It-Works
Next, in your feature file, add an unknown step as the last step of your scenario to make it fail:
@runMe
Scenario: Visit Google Page
  Given that google page is displayed
  And I am an unknown step
Then run the test through tags:
./node_modules/.bin/cypress-tags run -e TAGS='@runMe'
For a solution that doesn't require changing the config file, you can pass retries as a parameter to specific tests that are known to be flaky for acceptable reasons.
https://docs.cypress.io/guides/guides/test-retries#Custom-Configurations
Meaning you can write (from the docs):
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})

FUNCTION_REGION env variable in Node.js is different from what GCP sets automatically for logs

I programmatically write logs from the function using code like this:
import {Logging} from '@google-cloud/logging';

const logging = new Logging();
const log = logging.log('log-name');
const metadata = {
  type: 'cloud_function',
  labels: {
    function_name: process.env.FUNCTION_NAME,
    project: process.env.GCLOUD_PROJECT,
    region: process.env.FUNCTION_REGION
  },
};
log.write(
  log.entry(metadata, "some message")
);
Later, in Logs Explorer, I see my log message with labels.region set to us1, whereas the standard logs that GCP adds, e.g. "Function execution started", contain us-central1.
Shouldn't they be the same? Maybe I missed something, or if it was done intentionally, what is the reason behind it?
process.env.FUNCTION_REGION is populated automatically only in the Node 8 runtime; in newer runtimes it was deprecated. More info in the documentation.
If your function requires one of the environment variables from an older runtime, you can set the variable yourself when deploying your function.
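For example, with the gcloud CLI (a hedged sketch; the function name and region value are placeholders for your own):
gcloud functions deploy myFunction --set-env-vars FUNCTION_REGION=us-central1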

Listening to remote AWS SQS from local using serverless

I want to execute the Lambda function locally on an SQS event from my AWS account. I have defined the required event, but it is not getting triggered.
How can this be achieved?
I am able to send messages to the same queue using a cron event from my local machine.
Here are a few things I tried, but they didn't work for me.
functions:
  account-data-delta-test:
    handler: functions/test/data/dataDeltatestGenerator.handler
    name: ${self:provider.stage}-account-data-delta-test
    description: account delta update - ${self:provider.stage}-account-data-delta-test
    tags:
      Name: ${self:provider.stage}-account-data-delta-test
    # keeping 5 minute function timeout just in case large volume of data.
    timeout: 300
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ testGenerationQueue, Arn ]
          batchSize: 10
----------
Policies:
  - PolicyName: ${self:provider.stage}-test-sqs-policy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - sqs:ReceiveMessage
            - sqs:DeleteMessage
            - sqs:GetQueueAttributes
            - sqs:ChangeMessageVisibility
            - sqs:SendMessage
            - sqs:GetQueueUrl
            - sqs:ListQueues
          Resource: "*"
---------------
Resources:
  testGenerationQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: ${self:provider.stage}-account-test-queue
      VisibilityTimeout: 60
      Tags:
        - Key: Name
          Value: ${self:provider.stage}-account-test-queue
-------------
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({
  region: process.env.REGION,
});

exports.handler = async (event) => {
  console.error('------------ >>>>CRON:START: Test delta Job run.', event);
  log.error('------------ >>>>CRON:START: Test delta Job run.', event);
};
You can't trigger your local Lambda function from your remote context, because they have nothing in common.
I suppose your goal is to test the logic of the Lambda function; if so, you have two options.
Option 1
A faster way could be to invoke the function locally using sam local invoke. This command accepts several arguments; one of them is the event source, i.e. the event payload that SQS would send to the Lambda when it is triggered:
sam local invoke -e sqs.input.json account-data-delta-test
Your sqs.input.json can be generated using sam local generate-event sqs receive-message.
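An abridged example of the generated payload (the IDs, ARN, and region are sample values emitted by the generator; regenerate the file yourself for the exact contents):
{
  "Records": [
    {
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "receiptHandle": "MessageReceiptHandle",
      "body": "Hello from SQS!",
      "attributes": {
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1523232000000"
      },
      "messageAttributes": {},
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
      "awsRegion": "us-east-1"
    }
  ]
}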
This way you will actually test your Lambda locally.
Pros: it's fast.
Cons: you still have to test the trigger once you deploy on AWS.
Option 2
In the second scenario you sacrifice the binding between the queue and the Lambda. You have to trigger your function at a fixed interval and explicitly call ReceiveMessage in your code, as in the sketch below.
Pro: you can read a real message from a real queue.
Con: you have to invoke the function at a regular interval, which is not handy.
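A minimal polling sketch for this option, assuming the AWS SDK for JavaScript v2 and a QUEUE_URL environment variable (both are assumptions, not from the original answer):
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: process.env.REGION });

async function pollOnce() {
  // long-poll the real queue for up to 10 messages
  const { Messages = [] } = await sqs.receiveMessage({
    QueueUrl: process.env.QUEUE_URL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20,
  }).promise();

  for (const msg of Messages) {
    // ...run the same logic your handler would run on msg.Body...
    // then delete the message so it is not redelivered
    await sqs.deleteMessage({
      QueueUrl: process.env.QUEUE_URL,
      ReceiptHandle: msg.ReceiptHandle,
    }).promise();
  }
}

setInterval(pollOnce, 60 * 1000); // invoke at a fixed interval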

Logstash can't handle all input plugins

I use the Logstash http_poller input to collect metrics from different endpoints.
For each endpoint I have a separate config file with an "input" plugin like:
input {
  http_poller {
    urls => {
      server_1 => { url => 'http://10.200.3.1:8809/metrics' }
    }
    request_timeout => 5
    tags => 'TL.QA.proxy-service'
    interval => 60
    metadata_target => 'http_poller_metadata'
    type => 'tl_qa_http_metrics'
  }
}
I have ~1000 such files in one directory.
When I start Logstash I point it at the directory so it reads all those files:
./bin/logstash -f /opt/logstash-5.6.2/configs/
When I had a small number of files (~100) it worked pretty well. But now it looks like Logstash doesn't have enough time to read all the files, and it doesn't collect data from all endpoints.
Can you please advise how I can improve this?

Laravel 5.0, env() returns null during concurrent requests

The problem is that when I try to get a config variable using env('setting') or \Config::get('setting'), it sometimes returns null.
For testing purposes I created a simple route:
Route::get('/test', function () {
    $env = env('SETTING');
    if (!$env) {
        \Log::warning('test', [$env]);
    }
});
Then I used Apache Benchmark, and the results were like this:
Calling only one request at a time (ab -n 100 -c 1 http://localhost/test) there was no problem and no records in the log file.
Calling with 10 concurrent requests (ab -n 100 -c 10 http://localhost/test) I got about 20 lines like this: [2015-06-22 14:19:48] local.WARNING: test [null]
Does anybody know what the problem could be? Is there something missing in my configuration or in my PHP settings?
This is a known bug in the dotenv package; see the discussion here:
https://github.com/laravel/framework/issues/8191
This happened to me as well. My workaround: in your config/app.php, add this:
'setting' => env('SETTING'),
Then when you want to get the setting, read it from config:
$env = config('app.setting');
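As a follow-up (a standard Laravel step, not from the original answer): once every env() call lives in the config files, you can cache the configuration with php artisan config:cache, which sidesteps the dotenv reads at request time entirely. Just remember that with a cached config you must always read values via config(), never env().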
