Increase Ganache Gas Limit from the Graphical Interface

I am struggling with Ganache. I have some big tests that I want to run, but it fails with:
"X ran out of gas. Something in the constructor (ex: infinite loop) caused gas estimation to fail. Try:
* Making your contract constructor more efficient
* Setting the gas manually in your config or as a deployment parameter
* Using the solc optimizer settings in 'truffle-config.js'
* Setting a higher network block limit if you are on a
private network or test client (like ganache)."
I already increased the gas limit to the maximum in my truffle-config.js, but it is not enough because it is capped at 6721975.
I saw some people talking about ganache-cli -l 30000000, but I don't have ganache available on the command line.
My question is: how do I change this value?

Actually, you can't increase the gas limit of an existing workspace in the Ganache GUI. What you can do is create a new workspace and, in the initialization settings, set a higher gas limit under the "Chain" tab.
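If you do want the command-line variant mentioned in the question, ganache-cli can be installed through npm and started with a higher block gas limit. A minimal sketch, assuming Node.js/npm are installed (30000000 is just the value from the question):

npm install -g ganache-cli
ganache-cli -l 30000000

Truffle then has to point at that instance (host/port in the development network) instead of the GUI workspace.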

Please note that, for the latest Truffle version, truffle-config.js is slightly different in the compilers section: you need to add a settings attribute around the optimizer. Otherwise, the optimizer will not be enabled and you can easily get the following error:
"YOUR CONTRACT" ran out of gas. Something in the constructor (ex: infinite loop) caused gas estimation to fail. Try:
* Making your contract constructor more efficient
* Setting the gas manually in your config or as a deployment parameter
* Using the solc optimizer settings in 'truffle-config.js'
* Setting a higher network block limit if you are on a
private network or test client (like ganache).
The following is my truffle-config.js for reference:
module.exports = {
  // ...Please set your own contracts_directory and contracts_build_directory
  networks: {
    development: {
      host: "127.0.0.1",
      port: 8545,
      network_id: 5777, // Ganache
    }
  },
  compilers: {
    solc: {
      version: "0.8.4",
      settings: { // need to add "settings" before "optimizer" for the latest Truffle version
        optimizer: {
          enabled: true, // enable the optimizer
          runs: 200,
        },
        evmVersion: "berlin",
      },
    },
  },
}
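As the error message also suggests, the gas can be set per deployment in a migration as well. A rough sketch (MyBigContract and the migration filename are placeholders; the value is still capped by the node's block gas limit):

// migrations/2_deploy_contracts.js
const MyBigContract = artifacts.require("MyBigContract");

module.exports = function (deployer) {
  // Per-deployment gas override; only effective if the node's block gas limit allows it.
  deployer.deploy(MyBigContract, { gas: 6000000 });
};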

Related

NATS KV history larger than specified when creating bucket

We are using NATS with the KeyValue store feature (NATS KV). We develop Go microservices and use the NATS Go client. We are trying to leverage the history feature of NATS KV, so far without success.
At times we retrieve a larger history than the one specified when creating the KV.
We create the KV using:
kv, _ := js.CreateKeyValue(&nats.KeyValueConfig{
    Bucket:       "some-bucket",
    Description:  "store for some-service",
    MaxValueSize: 0,
    History:      10, // should we ever get more than 10 elements when reading history?
    TTL:          TTL,
    MaxBytes:     5000000,
    Storage:      nats.MemoryStorage,
    Replicas:     0,
    Placement:    nil,
})
and we retrieve values using
kv.History("someId")
When we get results larger than the specified History, we get several KeyValueEntry items with the same delta value.
We are quite write-intensive and also reuse the same key id a lot:
we write values until a certain point,
call kv.Purge("someId"),
and then we may reuse "someId" later on in the process.
Writes and reads are asynchronous and concurrent.
Here is our client go.mod regarding nats:
github.com/nats-io/nats-server/v2 v2.8.4
github.com/nats-io/nats.go v1.16.0
and we run a nats server version 2.8.4.
Note: I did not go far enough into the KV implementation details, but I am worried that this is linked to JetStream. It seems like a watcher is created each time and re-reads all previous values regardless of history size. This leads me to another question: is the KV history feature appropriate for read-intensive use cases?
Thanks for your help or pointers on this matter.

Selecting Quality Gate for SonarQube Analysis in Jenkinsfile

I have a Jenkinsfile that, among other things, performs SonarQube analysis on my build and passes it through a 'Quality Gate' stage. The analysis is placed on the SonarQube server, where I can see all the details. The relevant pieces of code for the analysis and quality gate are below (not mine; they come from the documentation):
stage('SonarCloud') {
    steps {
        withSonarQubeEnv('SonarQube') {
            sh 'mvn clean package sonar:sonar'
        }
    }
}
stage("Quality Gate") {
    steps {
        timeout(time: 15, unit: 'MINUTES') { // If analysis takes longer than indicated time, then build will be aborted
            waitForQualityGate abortPipeline: true
            script {
                def qg = waitForQualityGate() // Waiting for analysis to be completed
                if (qg.status != 'OK') { // If quality gate was not met, then present error
                    error "Pipeline aborted due to quality gate failure: ${qg.status}"
                }
            }
        }
    }
}
Currently, once the analysis is completed and placed on the server, it uses the server's default quality gate. I wonder if I can specify which quality gate to use for my analysis before proceeding to the 'Quality Gate' stage. (I have another quality gate set up, with different acceptance criteria, that I would like to use for the 'Quality Gate' stage.)
Altering the default quality gate is not an option because other people are using it (hence my own quality gate).
I have looked into the 'ceTaskUrl' link that can be found in the report-task.txt file, but didn't get far with it (no variable that I can see and use to select a quality gate).
I also found this Jenkinsfile. I tried to use some of its code, with additional googling on top of it, in hopes of accessing and altering the quality gate, but also didn't get far with it.
It is worth mentioning that I do not have admin privileges at the SonarQube server I am using. However, I can request for a new quality gate to be configured as required, in case it is needed.
You can do so using the Web API, but for that you need the Administer Quality Gates permission.
Please find more details in this answer:
How to assign Quality Gate dynamically to project from the script [SonarQube 6.5]?
If you don't get the proper permission, the alternative is to use the SonarQube UI, where you can specify which quality gate should be used for which project.
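If you do get the permission, the selection can even be scripted from the Jenkinsfile itself, before the 'Quality Gate' stage. A rough sketch using the Web API (the project key and gate id are placeholders, parameter names vary slightly between SonarQube versions, and this assumes the token exposed by withSonarQubeEnv belongs to a user with the Administer Quality Gates permission):

stage('Select Quality Gate') {
    steps {
        withSonarQubeEnv('SonarQube') {
            // POST api/qualitygates/select attaches a specific quality gate to a project.
            sh 'curl -s -f -u "$SONAR_AUTH_TOKEN:" -X POST "$SONAR_HOST_URL/api/qualitygates/select?projectKey=my-project-key&gateId=42"'
        }
    }
}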

Clear Prometheus metrics from a collector

I'm trying to modify prometheus mesos exporter to expose framework states:
https://github.com/mesos/mesos_exporter/pull/97/files
A bit about the mesos exporter: it collects data from both the mesos /metrics/snapshot endpoint and the /state endpoint.
The issue with the latter, both with the changes in my PR and with the existing metrics reported on slaves, is that metrics, once created, last forever (until the exporter is restarted).
So if, for example, a framework completes, the metrics reported for this framework become stale (e.g. they still show the framework using CPU).
So I'm trying to figure out how I can clear those stale metrics. If I could just clear the entire mesosStateCollector each time before collect is done, it would be awesome.
There is a delete method on the different Prometheus vectors (e.g. GaugeVec), but in order to delete a metric I need not only the label name but also the label value of the relevant metric.
OK, so it seems it was easier than I thought (if only I had been familiar with Go before approaching this task).
You just need to cast the collector to a GaugeVec and reset it:
prometheus.NewGaugeVec(prometheus.GaugeOpts{
    Help:      "Total slave CPUs (fractional)",
    Namespace: "mesos",
    Subsystem: "slave",
    Name:      "cpus",
}, labels): func(st *state, c prometheus.Collector) {
    c.(*prometheus.GaugeVec).Reset() // <-- added this for each GaugeVec
    for _, s := range st.Slaves {
        c.(*prometheus.GaugeVec).WithLabelValues(s.PID).Set(s.Total.CPUs)
    }
},

Are ElasticSearch scripts safe for concurrency issues?

I'm running a process which updates user documents on Elasticsearch. This process can run in multiple instances on different machines. If two instances try to run a script that updates the same document at the same time, could some of the data be lost because of a race condition, or is the internal script mechanism safe (using the version property for optimistic locking, or some other way)?
The official ES scripts documentation
Using the version attribute is safe for that kind of job.
Do the search with version: true:
GET /index/type/_search
{
  "version": true,
  your_query...
}
Then for the update, add a version attribute corresponding to the number returned during the search.
POST /index/type/the_id_to_update/_update?version=3 // <- returned by the search
{
  "doc": {
    "ok": "name"
  }
}
https://www.elastic.co/guide/en/elasticsearch/guide/current/version-control.html
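For scripted updates of a single document there is also the retry_on_conflict parameter of the _update endpoint, which re-runs the get-modify-reindex cycle when a concurrent write bumps the version in between. A sketch with the same placeholder index/type/id names (counter and inc are placeholder fields; the object script syntax shown is the Elasticsearch 5.6+ form, older versions use a plain string or "inline"):

POST /index/type/the_id_to_update/_update?retry_on_conflict=3
{
  "script": {
    "source": "ctx._source.counter += params.inc",
    "params": { "inc": 1 }
  }
}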

StackExchange.Redis timeout and "No connection is available to service this operation"

I have the following issues in our production environment (a web farm of 4 nodes with a load balancer on top):
1) Timeout performing HGET key, inst: 3, queue: 29, qu=0, qs=29, qc=0, wr=0/0
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in ConnectionMultiplexer.cs:line 1699. This happens 3-10 times in a minute.
2) No connection is available to service this operation: HGET key at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in ConnectionMultiplexer.cs:line 1666
I tried to implement what Marc suggested (maybe I interpreted it incorrectly): it is better to have fewer connections to Redis than many.
I made the following implementation:
public class SeRedisConnection
{
    private static ConnectionMultiplexer _redis;
    private static readonly object SyncLock = new object();

    public static IDatabase GetDatabase()
    {
        if (_redis == null || !_redis.IsConnected || !_redis.GetDatabase().IsConnected(default(RedisKey)))
        {
            lock (SyncLock)
            {
                try
                {
                    var configurationOptions = new ConfigurationOptions
                    {
                        AbortOnConnectFail = false
                    };
                    configurationOptions.EndPoints.Add(new DnsEndPoint(ConfigurationHelper.CacheServerHost,
                        ConfigurationHelper.CacheServerHostPort));
                    _redis = ConnectionMultiplexer.Connect(configurationOptions);
                }
                catch (Exception ex)
                {
                    IoC.Container.Resolve<IErrorLog>().Error(ex);
                    return null;
                }
            }
        }
        return _redis.GetDatabase();
    }

    public static void Dispose()
    {
        _redis.Dispose();
    }
}
Actually, Dispose is not being used right now. Also, there are some specifics of the implementation that could cause such behavior (I'm only using hashes):
1. Add, Remove hashes - async
2. Get - sync
Could somebody help me figure out how to avoid this behavior?
Thanks a lot in advance!
SOLVED: increasing the client connection timeout after evaluating network capabilities.
UPDATE 2: Actually, it didn't solve the problem; it came back once the cache volume started to grow (e.g. beyond 2 GB).
Then I saw the same pattern: these timeouts happened about every 5 minutes.
Our sites froze for some period of time every 5 minutes, until the fork operation finished.
Then I found out that there is an option to make a fork (save to disk) every x seconds:
save 900 1
save 300 10
save 60 10000
In my case it was "save 300 10": save every 5 minutes if at least 10 updates happened. I also found out that the fork can be very expensive. Commenting out the "save" section resolved the problem completely. We can comment out the "save" section because we only use Redis as an in-memory cache; we don't need any persistence.
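For reference, the corresponding change in redis.conf is simply to comment out those lines; an empty save directive disables RDB snapshotting entirely (only sensible when Redis is used purely as a cache):

# save 900 1
# save 300 10
# save 60 10000
save ""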
Here is the configuration of our cache servers, the "Redis 2.4.6" Windows port: https://github.com/rgl/redis/downloads
Maybe it has been solved in recent versions of the Redis Windows port from MS Open Tech: http://msopentech.com/blog/2013/04/22/redis-on-windows-stable-and-reliable/
but I haven't tested it yet.
Anyway, StackExchange.Redis has nothing to do with this issue and works quite stably in our production environment, thanks to Marc Gravell.
FINAL UPDATE:
Redis is a single-threaded solution. It is extremely fast, but when it comes to releasing memory (removing items that are stale or expired), problems emerge because the single thread has to reclaim the memory (not a fast operation, whatever algorithm is used) while also handling GET and SET operations. Of course, this happens once we are talking about a medium-loaded production environment. Even if you use a cluster with slaves, you will see the same behavior when the memory limit is reached.
It looks like in most cases this exception is a client issue. Previous versions of StackExchange.Redis used the Win32 socket API directly, which sometimes has a negative impact. ASP.NET internal routing is probably somehow related to it as well.
The good news is that StackExchange.Redis's network infrastructure was completely rewritten recently. The latest version is 2.0.513. Try it; there is a good chance that your problem will go away.
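As a side note, the pattern usually recommended for sharing one multiplexer (instead of the manual null/lock check shown in the question) is a Lazy<ConnectionMultiplexer>. A minimal sketch, with a placeholder connection string:

using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // Lazy<T> gives thread-safe, one-time initialisation without an explicit lock.
    // "cachehost:6379" is a placeholder; abortConnect=false lets the multiplexer keep
    // retrying in the background instead of throwing while the server is unreachable.
    private static readonly Lazy<ConnectionMultiplexer> LazyMultiplexer =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("cachehost:6379,abortConnect=false"));

    public static IDatabase GetDatabase()
    {
        return LazyMultiplexer.Value.GetDatabase();
    }
}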
