I'm trying to use gomplate as a configuration generator. The problem I'm facing is having multiple mutations and environments in which the application needs to be configured differently. I'd like a user-friendly, readable approach with as little repetition as possible in the template and source data.
The motivation behind this is to have generated source data app_config which can be used in a subsequent gomplate template as follows:
feature_a={{ index (datasource "app_config").features.feature_a .Env.APP_MUTATION .Env.ENV_NAME | required }}
feature_b={{ index (datasource "app_config").features.feature_b .Env.APP_MUTATION .Env.ENV_NAME | required }}
Basically I'd like to have this source data
features:
  feature_a:
    ~: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
converted into this result (used as the app_config gomplate datasource):
features:
  feature_a:
    mut_a:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
    mut_b:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
  feature_b:
    mut_a:
      dev: true
      test: true
      load: false
      staging: false
      prod: false
    mut_b:
      dev: true
      test: true
      load: true
      staging: true
      prod: true
given that the platform datasource is defined as:
mutations:
  - mut_a
  - mut_b
environments:
  - dev
  - test
  - load
  - staging
  - prod
I chose to use ~ to state that every environment or mutation that is not explicitly defined gets the value behind ~.
This should work under the assumption that the lowest level is the environment and the level above it is the mutation. If environments are not defined, then the mutation level is the lowest and applies to all mutations and environments. However, I know this brings extra complexity, so I'm willing to use a simplified variant where mutations are always defined:
features:
  feature_a:
    mut_a: true
    mut_b: true
  feature_b:
    mut_a:
      ~: false
      dev: true
      test: true
    mut_b:
      ~: true
However, since I'm fairly new to gomplate, I'm not sure whether it is the right tool for the job.
I welcome any feedback.
After further investigation, I decided that this problem is better solved with a separate tool.
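For reference, such a separate expansion tool is small. Here is a minimal sketch in Python (the feature names, platform lists, and the ~ default key come from the example above; the function itself and its name are hypothetical):

```python
def expand(features, mutations, environments):
    """Expand a compact feature map into full per-mutation/per-environment values.

    A plain scalar applies to all mutations and environments; the key "~"
    provides the default for any level that is not listed explicitly.
    """
    result = {}
    for feature, spec in features.items():
        result[feature] = {}
        # A bare scalar is shorthand for a default at the mutation level.
        if not isinstance(spec, dict):
            spec = {"~": spec}
        feature_default = spec.get("~")
        for mut in mutations:
            env_spec = spec.get(mut, feature_default)
            if not isinstance(env_spec, dict):
                env_spec = {"~": env_spec}
            env_default = env_spec.get("~", feature_default)
            result[feature][mut] = {
                env: env_spec.get(env, env_default) for env in environments
            }
    return result

features = {
    "feature_a": {"~": True},
    "feature_b": {
        "mut_a": {"~": False, "dev": True, "test": True},
        "mut_b": {"~": True},
    },
}
expanded = expand(features, ["mut_a", "mut_b"],
                  ["dev", "test", "load", "staging", "prod"])
```

Running this on the compact example above produces the fully expanded structure shown earlier, which can then be dumped to YAML and used as the app_config datasource.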
I am creating end-to-end (e2e) tests using Protractor with Jasmine and Angular 6. I have written almost 10 test cases. They all work, but some of them fail intermittently because of a Jasmine timeout. I have configured the timeout value as shown below, but I am not getting consistent results: sometimes a test case succeeds, and on the next run it fails. I have searched on Google but have not found any useful solution.
I have defined some common helper methods for waiting:
waitForElement(element: ElementFinder) {
  browser.waitForAngularEnabled(false);
  browser.wait(() => element.isPresent(), 100000, 'timeout: ');
}

waitForUrl(url: string) {
  browser.wait(() => protractor.ExpectedConditions.urlContains(url), 100000, 'timeout');
}
And in protractor.conf.js I have defined:
jasmineNodeOpts: {
  showColors: true,
  includeStackTrace: true,
  defaultTimeoutInterval: 20000,
  print: function () {}
}
I am getting the errors below:
- Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
- Failed: stale element reference: element is not attached to the page document
(Session info: chrome=76.0.3809.100)
(Driver info: chromedriver=76.0.3809.12 (220b19a666554bdcac56dff9ffd44c300842c933-refs/branch-heads/3809#{#83}),platform=Windows NT 10.0.17134 x86_64)
I have found the solution:
I had configured a wait timeout of 100000 ms for each individual element lookup, while the whole spec timeout was only 20000 ms. So I followed the process below:
Keep the full spec timeout greater than the sum of all the individual element wait timeouts. I configured defaultTimeoutInterval in jasmineNodeOpts to be greater than the sum of the wait timeouts used in each test case, and then set a large value, allScriptsTimeout: 2000000, inside exports.config. That resolved my problem.
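To make the relationship concrete, here is a minimal sketch of the relevant protractor.conf.js pieces (the numbers are illustrative, not prescriptive):

```javascript
// protractor.conf.js (sketch): the spec-level timeout must exceed the sum
// of the per-element wait timeouts used inside a single spec, and
// allScriptsTimeout caps the whole script run.
exports.config = {
  allScriptsTimeout: 2000000,
  jasmineNodeOpts: {
    showColors: true,
    includeStackTrace: true,
    // e.g. three 100000 ms waits in one spec => keep this above 300000 ms
    defaultTimeoutInterval: 400000,
    print: function () {}
  }
};
```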
NB: I posted this answer because I think it may help others who face this kind of problem.
I got serverless-plugin-warmup 4.2.0-rc.1 working fine with Serverless version 1.36.2, but it only executes a single warmup call instead of the configured five.
Is there a problem in my serverless.yml config?
It is also strange that I have to add warmup: true to the function section to get the function warmed up. According to the docs at https://github.com/FidelLimited/serverless-plugin-warmup, the config in the custom section should be enough.
plugins:
  - serverless-prune-plugin
  - serverless-plugin-warmup

custom:
  warmup:
    enabled: true
    concurrency: 5
    prewarm: true
    schedule: rate(2 minutes)
    source: { "type": "keepLambdaWarm" }
    timeout: 60

functions:
  myFunction:
    name: ${self:service}-${opt:stage}-${opt:version}
    handler: myHandler
    environment:
      FUNCTION_NAME: myFunction
    warmup: true
In AWS CloudWatch I only see one execution every 2 minutes. I would expect to see 5 executions every 2 minutes, or do I misunderstand something here?
EDIT:
Now, using the master branch, concurrency works, but the context delivered to the function being warmed is broken. Using Spring Cloud Functions I get: "Error parsing Client Context as JSON".
Looking at the JS of the generated warmup function, the delivered source does not look right:
const functions = [{"name":"myFunction","config":{"enabled":true,"source":"\"\\\"{\\\\\\\"source\\\\\\\":\\\\\\\"serverless-plugin-warmup\\\\\\\"}\\\"\"","concurrency":3}}];
Config is:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    timeout: 60
Adding the property sourceRaw: true to the warmup config generates a clean source in the function JS:
const functions = [{"name":"myFunctionName","config":{"enabled":true,"source":"{\"type\":\"keepLambdaWarm\"}","concurrency":3}}];
Config:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    source: { "type": "keepLambdaWarm" }
    sourceRaw: true
    timeout: 60
I'm working on an old project with a lot of code. It uses Webpack 3.8.1, and I'm trying to update to 4.4.1; it's a real obstacle course!
The main pain point is that the project uses the CommonsChunkPlugin:
new CommonsChunkPlugin({
  name: 'common',
  minChunks: 3,
  chunks: _.without(_.keys(entry), 'ace-iframe', 'custom-theme-ace'),
}),
new CommonsChunkPlugin({
  name: 'vendors',
  minChunks(module, count) {
    return isVendorModule(module) && count >= 2;
  },
  chunks: _.without(_.keys(entry), 'ace-iframe', 'custom-theme-ace'),
})
I know that Webpack 4 no longer provides CommonsChunkPlugin. A big thanks to the articles below, which saved me hours of research:
https://gist.github.com/sokra/1522d586b8e5c0f5072d7565c2bee693
https://medium.com/webpack/webpack-4-code-splitting-chunk-graph-and-the-splitchunks-optimization-be739a861366
Thanks to these amazing links, I've replaced CommonsChunkPlugin with these lines:
optimization: {
  splitChunks: {
    cacheGroups: {
      vendors: {
        priority: 50,
        name: 'vendors',
        chunks: 'async',
        reuseExistingChunk: true,
        minChunks: 2,
        enforce: true,
        test: /node_modules/,
      },
      common: {
        name: 'common',
        priority: 10,
        chunks: 'async',
        reuseExistingChunk: true,
        minChunks: 2,
        enforce: true,
      },
    },
  },
},
Thanks to this config, the application builds correctly, the chunks are created, and the app runs as expected.
But the build time is really slow: more than 7 minutes!
Funny thing: if I completely remove the whole optimization.splitChunks configuration, the application still works perfectly and the build time is still around 7 minutes. It's as if what I've done in optimization.splitChunks were useless.
I've tried changing the chunks property; to be honest, I don't really understand its role.
If I set it to all, the build is much quicker: around 1 minute.
But unfortunately, the files generated from my entry points do not run correctly: Webpack seems to wait for the chunks to be loaded before executing my own code:
// Code from webpack
function checkDeferredModules() {
  var result;
  for(var i = 0; i < deferredModules.length; i++) {
    var deferredModule = deferredModules[i];
    var fulfilled = true;
    for(var j = 1; j < deferredModule.length; j++) {
      var depId = deferredModule[j];
      if(installedChunks[depId] !== 0) fulfilled = false;
    }
    // If I understand correctly, Webpack checks that the deferred modules are loaded
    if(fulfilled) {
      // If so, it runs the code of my entry point
      deferredModules.splice(i--, 1);
      result = __webpack_require__(__webpack_require__.s = deferredModule[0]);
    }
  }
  return result;
}
Please tell me I'm not wrong here: Webpack seems to wait for the deferred modules to be loaded, but it does not run the code which actually loads them. How am I supposed to make this work?
In brief:
with chunks set to async: everything works well, but the build time is not viable (more than 7 minutes)
with chunks set to all: the build time is fine (around 1 minute), but my code does not run ¯\_(ツ)_/¯
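One possible reading of the runtime code above (my interpretation, not a confirmed fix): with chunks: 'all', shared modules are split out of the entry chunks themselves, so every page must load the generated runtime/vendors/common files before its entry bundle; otherwise checkDeferredModules() never sees the dependency chunks as installed and never executes the entry module. A sketch, assuming HtmlWebpackPlugin is available to inject the script tags (the entry name is illustrative):

```javascript
// webpack.config.js (sketch) — with chunks: 'all', the HTML must include
// the split chunks before the entry bundle; HtmlWebpackPlugin injects
// runtime, vendors, common, and the entry in the right order.
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  optimization: {
    runtimeChunk: 'single', // one shared webpack runtime chunk
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: { test: /node_modules/, name: 'vendors', priority: 50 },
        common: { name: 'common', minChunks: 2, priority: 10 },
      },
    },
  },
  plugins: [
    // One plugin instance per page; 'myEntry' is a hypothetical entry name.
    new HtmlWebpackPlugin({ chunks: ['runtime', 'vendors', 'common', 'myEntry'] }),
  ],
};
```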
Sorry for the long post, but if someone can help me make all of this work with a reasonable build time, it would be perfect.
Or at least help me understand how all of this is supposed to work; the official documentation is not very helpful :(
Thanks in advance!
EDIT: I've tried to continue with chunks set to async, despite the 7-minute build time.
I have 20 entry points, and if I add an import statement pulling in jQuery and jQuery-UI in one of them, the build time doubles.
If I add it to 5 files, the build crashes:
<--- Last few GCs --->
[15623:0x103000000] 222145 ms: Mark-sweep 1405.0 (1717.4) -> 1405.2 (1717.4) MB, 671.3 / 0.0 ms  allocation failure GC in old space requested
[15623:0x103000000] 222807 ms: Mark-sweep 1405.2 (1717.4) -> 1405.0 (1667.9) MB, 662.4 / 0.0 ms  last resort GC in old space requested
[15623:0x103000000] 223475 ms: Mark-sweep 1405.0 (1667.9) -> 1405.1 (1645.4) MB, 667.1 / 0.0 ms  last resort GC in old space requested

<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x1b6415c25ee1
1: fromString(aka fromString) [buffer.js:~298] [pc=0x1973a88756aa](this=0x1b6462b82311, string=0x1b642d3fe779, encoding=0x1b6462b82311)
3: from [buffer.js:177] [bytecode=0x1b6488c3b7c1 offset=11](this=0x1b644b936599, value=0x1b642d3fe779, encodingOrOffset=0x1b6462b82311)

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Out of memory... I think setting chunks to async is not the correct way to solve this issue :/
I have imported test results into SonarQube 6.2 according to https://docs.sonarqube.org/display/SONAR/Generic+Test+Data.
I can look at the detailed test results in SonarQube by navigating to the test file and clicking the "Show Measures" menu. The opened page shows the correct total of 293 tests, of which 31 failed. The test result details section, however, only shows 100 test results.
This page seems to get its data through a request like: http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m
with a result of:
{
  paging: {
    pageIndex: 1,
    pageSize: 100,
    total: 293
  },
  tests: [
    {
      id: "AVpDK1X_-2ky3xCh91QQ",
      name: "GuiButton:Type Checks->disabledBackgroundColor",
      fileId: "AVpC5Jod-2ky3xCh908m",
      fileKey: "org.sonarqube:Scripting-Tests-Publishing:dummytests/ScriptingEngine.Objects.GuiButtonTest.js",
      fileName: "dummytests/ScriptingEngine.Objects.GuiButtonTest.js",
      status: "OK",
      durationInMs: 8
      ...
}
From this I gather that the page size is set to 100 in the backend. Is there a way to increase it so that I can see all of the test results?
You can certainly call the web service with a larger page size parameter value, but you cannot change the page size requested by the UI.
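For example, assuming SonarQube's standard paging parameters (p for the page index, ps for the page size), a direct call against the same endpoint could look like this:

```shell
# Fetch up to 500 results in a single page
curl "http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m&ps=500"

# Or walk through the default-sized pages one by one
curl "http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m&p=2"
curl "http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m&p=3"
```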