Collecting Yarn Metrics - yarnpkg

I would like to collect metrics on how long yarn commands such as yarn install take to complete and other metrics such as how often the commands don't complete successfully.
My .yarnrc file looks like this:
registry "https://artifactory-content.company.build/artifactory/api/npm/npm"
"--ignore-engines" true
"#sail:registry" "https://artifactory-content.company.build/artifactory/api/npm/npm-local"
"#company-internal:registry" "https://artifactory-content.company.build/artifactory/api/npm/npm-local"
lastUpdateCheck 1617811239383
yarn-path "./yarn-1.19.1.js"
From what I understand, when I run the yarn command, it invokes the yarn-1.19.1.js file. Would it be possible to create a wrapper such that when any yarn command is run, it logs the metrics at the command level and then executes the yarn-1.19.1.js file?
Another approach I ran across was modifying the package.json file and overriding the commands, as mentioned here, but that doesn't seem scalable since new commands might be added later on.

Yes, you could create a wrapper script that invokes the Yarn core library. This is exactly what the Node.js script that yarn itself runs does:
#!/usr/bin/env node
'use strict';

var ver = process.versions.node;
var majorVer = parseInt(ver.split('.')[0], 10);

if (majorVer < 4) {
  console.error('Node version ' + ver + ' is not supported, please use Node.js 4.0 or higher.');
  process.exit(1);
} else {
  try {
    require(__dirname + '/../lib/v8-compile-cache.js');
  } catch (err) {
    // v8-compile-cache is optional; ignore if it cannot be loaded
  }
  var cli = require(__dirname + '/../lib/cli');
  if (!cli.autoRun) {
    cli.default().catch(function(error) {
      console.error(error.stack || error.message || error);
      process.exitCode = 1;
    });
  }
}
You could just edit that script like so:
const start = Date.now();

var cli = require(__dirname + '/../lib/cli');
if (!cli.autoRun) {
  cli.default()
    .then(function() {
      const durationInMs = Date.now() - start;
      // It succeeded, do your stuff.
    })
    .catch(function(error) {
      console.error(error.stack || error.message || error);
      process.exitCode = 1;
      const durationInMs = Date.now() - start;
      // It failed, do your stuff.
    });
}
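If you would rather not patch the vendored yarn-1.19.1.js itself, a variation on the same idea is to point yarn-path at a small wrapper that spawns the real file and records the duration and exit code. This is only a sketch: the wrapper file name, the metrics call, and the YARN_IGNORE_PATH guard (meant to stop the vendored script from re-reading yarn-path and re-spawning the wrapper) are assumptions you would want to verify against your setup.
#!/usr/bin/env node
// yarn-metrics-wrapper.js (hypothetical name) - referenced from .yarnrc via:
//   yarn-path "./yarn-metrics-wrapper.js"
'use strict';

const path = require('path');
const { spawn } = require('child_process');

const start = Date.now();
const args = process.argv.slice(2);
const subcommand = args[0] || 'install'; // bare `yarn` behaves like `yarn install`

const child = spawn(
  process.execPath, // the node binary currently running
  [path.join(__dirname, 'yarn-1.19.1.js')].concat(args),
  {
    stdio: 'inherit',
    // Assumption: prevents the vendored script from honouring yarn-path again
    // and re-spawning this wrapper in a loop.
    env: Object.assign({}, process.env, { YARN_IGNORE_PATH: '1' }),
  }
);

child.on('close', function (code) {
  const durationInMs = Date.now() - start;
  // Replace with your real metrics sink (file, statsd, HTTP endpoint, ...).
  console.error('[yarn-metrics] ' + subcommand + ' exited ' + code + ' after ' + durationInMs + 'ms');
  process.exitCode = code;
});
Because both this file and yarn-1.19.1.js live in the project, every yarn invocation (install, add, run, ...) goes through the wrapper without having to enumerate commands in package.json.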

Related

AWS Lambda logging through Serilog UDP sink and logstash silently fails

We have a .NET Core 2.1 AWS Lambda that I'm trying to hook into our existing logging system.
I'm trying to log through Serilog using a UDP sink to our logstash instance, for ingestion into our ElasticSearch logging database that is hosted on a private VPC. Running locally through a console, it logs fine, both to the console itself and through UDP into Elastic. However, when it runs as a lambda, it only logs to the console (i.e. CloudWatch) and doesn't output anything indicating that something is wrong. Possibly because UDP is stateless?
NuGet packages and versions:
Serilog 2.7.1
Serilog.Sinks.Udp 5.0.1
Here is the logging code we're using:
public static void Configure(string udpHost, int udpPort, string environment)
{
var udpFormatter = new JsonFormatter(renderMessage: true);
var loggerConfig = new LoggerConfiguration()
.Enrich.FromLogContext()
.MinimumLevel.Information()
.Enrich.WithProperty("applicationName", Assembly.GetExecutingAssembly().GetName().Name)
.Enrich.WithProperty("applicationVersion", Assembly.GetExecutingAssembly().GetName().Version.ToString())
.Enrich.WithProperty("tags", environment);
loggerConfig
.WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}")
.WriteTo.Udp(udpHost, udpPort, udpFormatter);
var logger = loggerConfig.CreateLogger();
Serilog.Log.Logger = logger;
Serilog.Debugging.SelfLog.Enable(Console.Error);
}
// this is output in the console from the lambda, but doesn't appear in the Database from the lambda
// when run locally, appears in both
Serilog.Log.Logger.Information("Hello from Serilog!");
...
// at end of lambda
Serilog.Log.CloseAndFlush();
And here is our UDP input on logstash:
udp {
port => 5000
tags => [ 'systest', 'serilog-nested' ]
codec => json
}
Does anyone know how I might go about resolving this? Or even just seeing what specifically is wrong so that I can start to find a solution.
Things tried so far include:
Pinging logstash from the lambda - impossible, lambda doesn't have ICMP
Various attempts to get the UDP sink to output errors, as seen above. Even putting in a completely fake address yields no error, though
Adding the lambda to a VPC from which I know logging is possible
Sleeping at the end of the lambda so that the logs have time to go through before the lambda exits
Checking the logstash logs to see if anything looks odd. It doesn't really. And the fact that local runs get through fine makes me think it's not that.
Using UDP directly. It doesn't seem to reach the server. I'm not sure if that's connectivity issues or just UDP itself from a lambda.
Lots of cursing and swearing
In line with my comment above, you can create a log subscription and stream to ES like so. I'm aware that this is NodeJS, so it's not quite the right answer, but you might be able to figure it out from here:
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const { Client } = require('@elastic/elasticsearch')
const elasticsearch = new Client({ ES_CLUSTER_DETAILS })
/**
* This is an example function to stream CloudWatch logs to ElasticSearch.
* @param event
* @param context
* @param callback
*/
export default (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = true
const payload = Buffer.from(event.awslogs.data, 'base64')
zlib.gunzip(payload, (err, result) => {
if (err) {
return callback(err)
}
const logObject = JSON.parse(result.toString('utf8'))
const elasticsearchBulkData = transform(logObject)
// Control messages produce no bulk payload, so skip them.
if (elasticsearchBulkData === null) {
return callback(null, 'control message skipped')
}
const params = { body: [] }
params.body.push(elasticsearchBulkData)
elasticsearch.bulk(params, (err, resp) => {
if (err) {
return callback(err)
}
callback(null, 'success')
})
})
}
function transform(payload) {
if (payload.messageType === 'CONTROL_MESSAGE') {
return null
}
let bulkRequestBody = ''
payload.logEvents.forEach((logEvent) => {
const timestamp = new Date(1 * logEvent.timestamp)
// index name format: cwl-YYYY.MM.DD
const indexName = [
`cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
(`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
(`0${timestamp.getUTCDate()}`).slice(-2), // day
].join('.')
const source = buildSource(logEvent.message, logEvent.extractedFields)
source['@id'] = logEvent.id
source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
source['@message'] = logEvent.message
source['@owner'] = payload.owner
source['@log_group'] = payload.logGroup
source['@log_stream'] = payload.logStream
const action = { index: {} }
action.index._index = indexName
action.index._type = 'lambdaLogs'
action.index._id = logEvent.id
bulkRequestBody += `${[
JSON.stringify(action),
JSON.stringify(source),
].join('\n')}\n`
})
return bulkRequestBody
}
function buildSource(message, extractedFields) {
if (extractedFields) {
const source = {}
for (const key in extractedFields) {
if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
const value = extractedFields[key]
if (isNumeric(value)) {
source[key] = 1 * value
continue
}
const jsonSubString = extractJson(value)
if (jsonSubString !== null) {
source[`$${key}`] = JSON.parse(jsonSubString)
}
source[key] = value
}
}
return source
}
const jsonSubString = extractJson(message)
if (jsonSubString !== null) {
return JSON.parse(jsonSubString)
}
return {}
}
function extractJson(message) {
const jsonStart = message.indexOf('{')
if (jsonStart < 0) return null
const jsonSubString = message.substring(jsonStart)
return isValidJson(jsonSubString) ? jsonSubString : null
}
function isValidJson(message) {
try {
JSON.parse(message)
} catch (e) { return false }
return true
}
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n)
}
One of my colleagues helped me get most of the way there, and then I managed to figure out the last bit.
I updated Serilog.Sinks.Udp to 6.0.0
I updated the UDP setup code to use the AddressFamily.InterNetwork specifier, which I don't believe was available in 5.0.1.
I removed enriching our log messages with "tags", since I believe it being present on the UDP endpoint somehow caused some kind of clash and I've seen it stop logging without a trace before.
And voila!
Here's the new logging setup code:
loggerConfig
.WriteTo.Udp(udpHost, udpPort, AddressFamily.InterNetwork, udpFormatter)
.WriteTo.Console(outputTemplate: "[{Level:u}]: {Message}{NewLine}{Exception}");

How can I run a truffle test on an existing contract?

I'm working on a simple ethereum contract and its truffle test counterpart, but the issue I'm running into is that I need the test to call an old deployment of the contract instead of redeploying it every time.
In the truffle documentation, it says the contract() function should be used when the contract is to be redeployed and mocha's describe() in all other cases, but even using describe, the geth client reports redeploying the contract every time.
Here is the test:
var md5 = require('md5');
var AuditRecord = artifacts.require("AuditRecord");
describe('AuditRecord', function() {
before(function() {
audit = AuditRecord.at('0x30ad3ceaf3f04696d1f7c8c4fbb9cfe4f7041822');
for (var i = 0; i < 10; ++i) {
if (Math.random() < 0.3) {
audit.enter(i, i, md5("test"), md5("data"), Date.now().toFixed());
} else {
audit.enter(i, i, md5("special_case"), md5("data"), Date.now().toFixed());
}
}
return audit.LogRecord({}, { fromBlock: 0, toBlock: 'latest'});
});
it("should read data", function() {
auditLog = AuditRecord.at('0x30ad3ceaf3f04696d1f7c8c4fbb9cfe4f7041822').LogRecord({}, { fromBlock: 0, toBlock: 'latest'});
auditLog.get(function(err, data) {
console.log("\n\n\n\t.:::: testing lookup:\n")
if (err) {
console.log(err);
return false;
}
for (obj in data) {
console.log("entry: # " + Date(data[obj].args.timestamp).toString());
console.log("\tuser: " + web3.toAscii(data[obj].args.user));
console.log("\tpatient: " + web3.toAscii(data[obj].args.patient));
console.log("\toperation: " + data[obj].args.operation);
//console.log(JSON.stringify(data[obj].args.));
}
assert(10 < data.length);
});
})
})
The test works in the sense that it finds my previously deployed contract at the hardcoded address, but for some reason it also deploys both migrations.sol and auditrecord.sol at unrelated addresses every time. My aim is to refer to the same contract every time this test is run.
Is there a way to achieve this?

Electron open & keep open cmd.exe

I'm building a little tool that should, among other things, allow me to start a Tomcat server.
Pretty easy: I just want a button to launch startup.bat and another to call shutdown.bat.
It works quite well (the server starts and stops), but completely in ninja mode: I can't manage to get the Tomcat console window with the logs.
From a classic command line, if I call startup.bat, a window opens with the logs inside.
I tried exec, execFile, spawn. I tried calling the bat directly, then cmd.exe, and even tried start. But I can't get the window.
I know I can get streams, but I would like not to bother with that.
Also, I'm only using the tool on Windows, so no need to think about other platforms for the moment.
const ipc = require('electron').ipcMain;
const execFile = require('child_process').execFile;
ipc.on('start-local-tomcat', function (event) {
execFile('cmd.exe', ['D:\\DEV\\apache-tomcat-8.0.12\\bin\\startup.bat'],
{env: {'CATALINA_HOME': 'D:\\DEV\\apache-tomcat-8.0.12'}},
function (error, stdout, stderr) {
console.log('stdout: ' + stdout);
console.log('stderr: ' + stderr);
if (error !== null) {
console.log('exec error: ' + error);
}
})
});
ipc.on('stop-local-tomcat', function (event) {
execFile('cmd.exe',['D:\\DEV\\apache-tomcat-8.0.12\\bin\\shutdown.bat'],
{env: {'CATALINA_HOME': 'D:\\DEV\\apache-tomcat-8.0.12'}},
function (error, stdout, stderr) {
console.log('stdout: ' + stdout);
console.log('stderr: ' + stderr);
if (error !== null) {
console.log('exec error: ' + error);
}
})
});
Finally, I just didn't read the docs enough: spawn has an option called detached that does exactly what I want:
var child = spawn(
'D:\\DEV\\apache-tomcat-8.0.12\\bin\\startup.bat',
{
env: {'CATALINA_HOME': 'D:\\DEV\\apache-tomcat-8.0.12'},
detached: true
}
);
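One follow-up note: if you also want the Electron process to be able to exit without waiting on Tomcat, the Node.js docs suggest pairing detached with stdio: 'ignore' and calling unref() on the child. A minimal sketch based on the snippet above:
const { spawn } = require('child_process');

// Fully detach the Tomcat process from the Electron main process.
var child = spawn(
  'D:\\DEV\\apache-tomcat-8.0.12\\bin\\startup.bat',
  {
    env: {'CATALINA_HOME': 'D:\\DEV\\apache-tomcat-8.0.12'},
    detached: true,
    stdio: 'ignore' // don't keep the child's stdio tied to the parent
  }
);

child.unref(); // let the parent exit independently of the child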

How to send CONTROL+C at spawn in nodejs

I spawn CMD as a child process, but if I send it a ping command I can't get out of it. How can I send the console a Ctrl+C to avoid this? Thanks!
var fs = require('fs');
var iconv = require('iconv-lite');
function sendData (msg) {
console.log('write msg ', msg);
cmd.stdin.write(msg + "\r\n");
}
function execCommand() {
console.log('start command line')
var s = {
e : 'exec_command',
d : {
data : {}
}
};
cmd = require('child_process').spawn('cmd', ['/K']);
cmd.stdout.on('data', function (data) {
console.log(iconv.decode(data, 'cp866'));
});
}
execCommand();
sendData('ping e1.ru -t');
sendData( EXIT ??? )
?????
I want to make a console, a full-fledged console through node.js.
sendData('dir');
sendData('cd /d Windows');
sendData('ping 8.8.8.8 -t');
senData( CONTROL + C );
senData('dir')
You'll want to explicitly call:
cmd.kill();
that'll do the trick. If you require the equivalent of CTRL-C then call:
cmd.kill('SIGINT');
See child_process.kill docs for more info.
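Wired into the snippet above, that could look like this (a sketch; note that on Windows Node.js emulates signals, so kill('SIGINT') terminates the spawned cmd process rather than delivering a real Ctrl+C inside its console):
execCommand();
sendData('ping 8.8.8.8 -t');

// Stop the long-running ping by killing the spawned cmd process after a while.
setTimeout(function () {
  cmd.kill('SIGINT');
}, 5000);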

Less CSS and local storage issue

I'm using LESS CSS (more exactly less.js), which seems to exploit LocalStorage under the hood. I had never seen an error like this before while running my app locally, but now I get "Persistent storage maximum size reached" at every page display, just above the link to the single .less file of my app.
This only happens with Firefox 12.0 so far.
Is there any way to solve this?
P.S.: mainly inspired by Calculating usage of localStorage space, this is what I ended up doing (it is based on Prototype and depends on a custom trivial Logger class, but it should be easy to adapt to your context):
"use strict";
var LocalStorageChecker = Class.create({
testDummyKey: "__DUMMY_DATA_KEY__",
maxIterations: 100,
logger: new Logger("LocalStorageChecker"),
analyzeStorage: function() {
var result = false;
if (Modernizr.localstorage && this._isLimitReached()) {
this._clear();
result = true; // report that the limit was reached and storage was cleared
}
return result;
},
_isLimitReached: function() {
var localStorage = window.localStorage;
var count = 0;
var limitIsReached = false;
do {
try {
var previousEntry = localStorage.getItem(this.testDummyKey);
var entry = (previousEntry == null ? "" : previousEntry) + "m";
localStorage.setItem(this.testDummyKey, entry);
}
catch(e) {
this.logger.debug("Limit exceeded after " + count + " iteration(s)");
limitIsReached = true;
}
}
while(!limitIsReached && count++ < this.maxIterations);
localStorage.removeItem(this.testDummyKey);
return limitIsReached;
},
_clear: function() {
try {
var localStorage = window.localStorage;
localStorage.clear();
this.logger.debug("Storage clear successfully performed");
}
catch(e) {
this.logger.error("An error occurred during storage clear: ");
this.logger.error(e);
}
}
});
document.observe("dom:loaded",function() {
var checker = new LocalStorageChecker();
checker.analyzeStorage();
});
P.P.S.: I didn't measure the performance impact on the UI yet, but a decorator could be created and perform the storage test only every X minutes (with the last timestamp of execution in the local storage for instance).
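Such a decorator might look roughly like this (a sketch; ThrottledStorageChecker, the key name, and the 10-minute interval are made up for illustration, and it wraps the LocalStorageChecker from above):
var ThrottledStorageChecker = Class.create({
  lastRunKey: "__STORAGE_CHECK_LAST_RUN__", // arbitrary key used to remember the last run
  intervalInMs: 10 * 60 * 1000,             // run the real check at most every 10 minutes

  initialize: function(checker) {
    this.checker = checker;
  },

  analyzeStorage: function() {
    var now = Date.now();
    var lastRun = parseInt(window.localStorage.getItem(this.lastRunKey), 10);
    if (!isNaN(lastRun) && (now - lastRun) < this.intervalInMs) {
      return false; // checked recently enough, skip the expensive probe
    }
    window.localStorage.setItem(this.lastRunKey, String(now));
    return this.checker.analyzeStorage();
  }
});

// Usage: wrap the existing checker before running it on page load.
document.observe("dom:loaded", function() {
  new ThrottledStorageChecker(new LocalStorageChecker()).analyzeStorage();
});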
Here is a good resource for the error you are running into.
http://www.sitepoint.com/building-web-pages-with-local-storage/#fbid=5fFWRXrnKjZ
It gives some insight into the fact that localStorage only has so much room and that you can max it out in each browser. Look into removing some data from localStorage to resolve your problem.
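If it helps to see how close you are to the limit before deciding what to remove, a rough usage estimate can be computed from the browser console (this counts characters, not exact bytes):
// Rough estimate of current localStorage usage for this origin.
var total = 0;
for (var i = 0; i < localStorage.length; i++) {
  var key = localStorage.key(i);
  total += key.length + (localStorage.getItem(key) || '').length;
}
console.log('localStorage currently holds about ' + total + ' characters');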
Less.js persistently caches content that is @import-ed. You can use the script below to clear that cached content: call destroyLessCache('/path/to/css/') and it will clear your localStorage of CSS files that have been cached.
function destroyLessCache(pathToCss) { // e.g. '/css/' or '/stylesheets/'
if (!window.localStorage || !less || less.env !== 'development') {
return;
}
var host = window.location.host;
var protocol = window.location.protocol;
var keyPrefix = protocol + '//' + host + pathToCss;
for (var key in window.localStorage) {
if (key.indexOf(keyPrefix) === 0) {
delete window.localStorage[key];
}
}
}
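For example, during development you could clear the cache on every page load (assuming the same Prototype dom:loaded hook used earlier and a /stylesheets/ path):
// Clear less.js's cached stylesheets on page load (development only).
document.observe("dom:loaded", function() {
  destroyLessCache('/stylesheets/');
});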
