How can I set the gap limit in CPLEX?

I would like to set a 30% gap limit. How can I do this using the settings file? I tried modifying the MIP tolerances section, setting "relative MIP gap tolerance" to 0.30, but when I check the engine log after a run, it doesn't appear that CPLEX has applied this gap. What can I do? Thank you.

In OPL you would write:
execute {
  cplex.epgap = 0.3;
}
and if you use a .ops (settings) file, do not forget to add the .ops file to the run configuration.
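For context, here is a minimal sketch of a .mod file with the setting in place (the toy model itself is hypothetical):
// parameter settings go in a top-level execute block, run before the solve
execute PARAMS {
  cplex.epgap = 0.3; // stop once the relative MIP gap is within 30%
}
dvar int+ x;
maximize x;
subject to {
  ct: x <= 10;
}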

Related

How to set trailing stop loss in Tradingview Pine Script v4 strategy?

I'm looking for a way to set a trailing stop loss inside a strategy script in TradingView's Pine Script version 4 language.
The docs provided with the platform do not seem to work.
Ideally I'd like to compute the trailing stop level while a position is open, plot that level, and monitor whether price falls below it.
Unfortunately the way variables are persisted changed in v4, and most of the docs for older versions no longer work.
Any help?
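No answer is recorded here, but a rough Pine Script v4 sketch of the idea described above might look like this (the moving-average entry condition and the 5% trail are placeholders, not a recommendation):
//@version=4
strategy("Trailing stop sketch", overlay=true)
trailPerc = input(5.0, "Trail %") / 100.0
// placeholder entry condition, purely for illustration
longCond = crossover(sma(close, 9), sma(close, 21))
if (longCond and strategy.position_size == 0)
    strategy.entry("L", strategy.long)
// persist the stop level across bars with 'var' (the v4 replacement
// for the old self-referencing variable pattern)
var float trailStop = na
if (strategy.position_size > 0)
    trailStop := max(nz(trailStop, low * (1 - trailPerc)), high * (1 - trailPerc))
else
    trailStop := float(na)
plot(trailStop, "Trail stop", color=color.red, style=plot.style_linebr)
// close the position when price falls below the trailing level
if (strategy.position_size > 0 and close < trailStop)
    strategy.close("L")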

How to increase max_locks_per_transaction

I've been performing fairly intensive schema dropping and creation on a PostgreSQL server, and I started getting:
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
I need to increase max_locks_per_transaction, but how can I increase it on Mac OS X?
Find the postgresql.conf file (under your data directory), open it in a text editor, and set max_locks_per_transaction = 1024.
If the line looks like #max_locks_per_transaction... you must remove the # to uncomment it.
It must look like this:
max_locks_per_transaction = 1024 # min 10
Then save the file and restart PostgreSQL.
It is a setting in your postgresql.conf. If you do not know where that file is, run SHOW config_file; at an SQL prompt.
Then, once you have modified that file, restart PostgreSQL. I don't know the exact command on macOS, but a reboot will work, of course.
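For illustration, the lookup and the restart might look like this (the Homebrew paths and service name are assumptions about your install):
-- in psql: find where the active configuration file lives
SHOW config_file;
# then, after editing that file, restart from a shell (Homebrew example):
brew services restart postgresql
# or with pg_ctl, pointing at your data directory:
pg_ctl -D /usr/local/var/postgres restart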

Apache NiFi GetTwitter

I have a simple question, as I am new to NiFi.
I have a GetTwitter processor set up and configured (correctly, I assume). I have the Twitter Endpoint set to Sample Endpoint. I run the processor and it runs, but nothing happens: I get no input/output.
How do I troubleshoot what it is doing (or in this case not doing)?
A couple of things you might look at:
- What activity does the processor show? You can look at the metrics to see if anything has been attempted (Tasks/Time) as well as if it succeeded (Out).
- Stop the downstream processor temporarily to make any output FlowFiles visible in the connection queue.
- Are there errors? Typically these appear in the top-left corner as a yellow icon.
- Are there related messages in the logs/nifi-app.log file? (See the example below.)
It might also help us help you if you describe the GetTwitter property settings a bit more. Can you share a screenshot (minus keys)?
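For example, you can watch the log while the processor runs (the path is relative to the NiFi install directory):
tail -f logs/nifi-app.log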
In my case it's because there were two sensitive values set. According to the documentation, when a sensitive value is set, the nifi.properties file's nifi.sensitive.props.key value must also be set; it is an empty string by default in the Hortonworks Data Platform distribution. I set this to some random string (literally random_STRING, but you can use anything), re-created my process from the template, and it began working.
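For reference, the property lives in nifi.properties; using the example string from above it would read:
nifi.sensitive.props.key=random_STRING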
In general I suppose this can be debugged by setting the log level to DEBUG.
However, in my case the issue was resolved more easily:
I had just set up a new cluster, and decided to copy all the Twitter keys and secrets to notepad first.
It turned out that despite carefully copying the keys from Twitter, one of them had a leading tab. When pasting directly into the GetTwitter processor this would not show, but fortunately it showed up in notepad, and I was able to remove it and make this work.
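If you do want to go the DEBUG route, a sketch of the logback change (the logger name below is the stock GetTwitter processor class; adjust it if your bundle differs):
<!-- in conf/logback.xml -->
<logger name="org.apache.nifi.processors.twitter.GetTwitter" level="DEBUG"/>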

Setting mapred.child.java.opts in Hive script results in MR job getting 'killed' right away

I have been having a few jobs fail due to OutOfMemory and "GC overhead limit exceeded" errors. To counter the former I tried setting SET mapred.child.java.opts="-Xmx3G"; at the start of the Hive script.**
Basically, any time I add this option to the script, the MR jobs that get scheduled (for the first of several queries in the script) are 'killed' right away.
Any thoughts on how to rectify this? Are there any other params that need to be tinkered with in conjunction with max heap space (e.g. io.sort.mb)? Any help would be most appreciated.
FWIW, I am using hive-0.7.0 with hadoop-0.20.2. The default setting for max heap size in our cluster is 1200M.
TIA.
** - Some other alternatives that were tried (with comical effect but no discernible change in outcome):
SET mapred.child.java.opts="-Xmx3G";
SET mapred.child.java.opts="-server -Xmx3072M";
SET mapred.map.child.java.opts ="-server -Xmx3072M";
SET mapred.reduce.child.java.opts ="-server -Xmx3072M";
SET mapred.child.java.opts="-Xmx2G";
Update 1: It is possible that it's not necessarily anything to do with setting heap size. Tinkering with mapred.child.java.opts in any way causes the same outcome. For example, setting it thusly, SET mapred.child.java.opts="-XX:+UseConcMarkSweepGC"; has the same result of MR jobs getting killed right away. Even setting it explicitly in the script to the 'cluster default' causes this.
Update 2: Added a pastebin of a grep of JobTracker logs here.
Figured this would end up being something trivial/inane, and it was in the end.
Setting mapred.child.java.opts thusly:
SET mapred.child.java.opts="-Xmx4G -XX:+UseConcMarkSweepGC";
is unacceptable. But this seems to go through fine:
SET mapred.child.java.opts=-Xmx4G -XX:+UseConcMarkSweepGC; (minus the double quotes)
sigh. Having better debug options/error messages would have been nice.
Two other guards can restrict task memory usage. Both are designed for admins to enforce QoS, so if you're not one of the admins on the cluster, you may be unable to change them.
The first is the ulimit, which can be set directly in the node OS, or by setting mapred.child.ulimit.
The second is a pair of cluster-wide mapred.cluster.max.*.memory.mb properties that enforce memory usage by comparing job settings mapred.job.map.memory.mb and mapred.job.reduce.memory.mb against those cluster-wide limits.
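As a sketch, these admin-side knobs live in the cluster's mapred-site.xml; the values below are illustrative, not recommendations:
<!-- per-task address-space ulimit, in kilobytes -->
<property>
  <name>mapred.child.ulimit</name>
  <value>4194304</value>
</property>
<!-- cluster-wide ceilings checked against mapred.job.{map,reduce}.memory.mb -->
<property>
  <name>mapred.cluster.max.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapred.cluster.max.reduce.memory.mb</name>
  <value>4096</value>
</property>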

Munin Graph - How To Set Max Upper Limit For mysql slowqueries & munin stats?

Wow, this is my very first post on Stack Overflow! I've been using its results for years, but this is the first time I'm 100% stumped and decided to join!
I use Munin to monitor and graph stuff like CPU, Memory, Loads, etc. on my VPS.
Sometimes I get a huge statistical outlier data point that throws my graphs out of whack. I want to set the upper limit for these graphs to simply avoid having these outliers impact the rest of the data view.
After hours of digging and experimenting I was able to change the upper limit on Loads by doing the following:
cd /etc/munin/plugins
pico load
I changed: echo 'graph_args --base 1000 -l 0'
to: echo 'graph_args --base 1000 -l 0 -u 5 --rigid'
It worked perfectly!
Unfortunately I've tried everything to get the munin stats processing time and mysql slowqueries graphs to have an upper limit and can't figure it out!
Here is the line in mysql_slowqueries
echo 'graph_args --base 1000 -l 0'
... and for munin_stats
"graph_args --base 1000 -l 0\n",
I've tried every combo of -u and --upper-limit for both of those and nothing I do is impacting the display of the graph to show a max upper limit.
Any ideas on what I need to change those lines to so I can get a fixed upper limit max?
Thanks!
I highly encourage playing with the scripts, even though you run the risk of them being overwritten by an update. Just back them up and replace them if you think it's needed. If you have built or improved things, don't forget to share them with us on github: https://github.com/munin-monitoring/munin
When you set --upper-limit to 100 and your value is 110, your graph will run to 110. If you add --rigid, your graph scale will stay at 100, and the line will be clipped, which is what you wanted in this case.
Your mysql_slowqueries graph line should read something like this (it caps the graph at 100):
echo 'graph_args --base 1000 -l 0 --upper-limit 100 --rigid'
Changing the scripts is highly discouraged, since with the next update the package manager might replace them and undo your changes.
Munin gives you different ways to define limits in the settings, both on the node itself and on the server.
You can find (sort of) an answer in the FAQ.
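On the server side, for example, you can override a plugin's graph_args per host in munin.conf without touching the plugin script (the host name below is a placeholder):
[myhost.example.com]
    address 127.0.0.1
    mysql_slowqueries.graph_args --base 1000 -l 0 --upper-limit 100 --rigid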
For me it worked really nicely to just create a file named /etc/munin/plugin-conf.d/load.conf with the following content:
[load]
env.load_warning 5
env.load_critical 10
Restart munin-node to apply the changes; on the next update of the graph you can see that the "warning" and "critical" levels have been set by clicking on the load graph in the overview (table below the graphs).
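How you restart depends on your init system; one of these standard service commands usually does it:
sudo service munin-node restart
# or
sudo systemctl restart munin-node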
