How to set the fee in a raw Bitcoin transaction using btcutil - Go

According to the docs (http://godoc.org/github.com/btcsuite/btcrpcclient) the fee can be set by using
SetTxFee(fee btcutil.Amount) // hard-coded 0.0006 BTC
I set the fee to 0.0000016 BTC/kilobyte and do as follows:
ListUnspent
SetTxFee
CreateRawTransaction
SignRawTransaction
SendRawTransaction
But when I try to send the transaction I get
-26: 256: absurdly-high-fee
Is there any other way to set the fee using this library?
debug.log:
ThreadRPCServer method=listunspent
ThreadRPCServer method=settxfee
ThreadRPCServer method=createrawtransaction
ThreadRPCServer method=signrawtransaction
ThreadRPCServer method=sendrawtransaction
Amounts:
amounts := map[btcutil.Address]btcutil.Amount{
destAddress: destAmount,
}
UPDATE
It seems like it tries to send the whole sum of the inputs, not the amount I want it to send.
If the transaction into address A is 1 BTC and I want to send 0.3 BTC to another address, how do I achieve this when setting the amounts?

settxfee is not used by the createrawtransaction command; with a raw transaction the fee is implicit.
If you have one input of 1 BTC and you send 0.9 BTC, the remaining 0.1 BTC is the transaction fee.
If you don't want to pay 0.1 BTC as the fee, send 0.09 BTC to a change address of your own and leave only 0.01 BTC unassigned; that 0.01 BTC is your transaction fee.
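The amounts map therefore has to carry the change output explicitly. A minimal Go sketch of that arithmetic; the addresses and amounts are placeholders, not values from the question:
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/chaincfg"
	"github.com/btcsuite/btcutil"
)

func main() {
	// Hypothetical addresses, for illustration only.
	destAddr, _ := btcutil.DecodeAddress("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa", &chaincfg.MainNetParams)
	changeAddr, _ := btcutil.DecodeAddress("1BitcoinEaterAddressDontSendf59kuE", &chaincfg.MainNetParams)

	inputTotal, _ := btcutil.NewAmount(1.0) // sum of the unspent outputs being spent
	send, _ := btcutil.NewAmount(0.3)       // what the destination should receive
	fee, _ := btcutil.NewAmount(0.0001)     // whatever you deliberately leave unassigned

	// Anything in the inputs not assigned to an output becomes the
	// miner fee, so pay the rest back to your own change address.
	amounts := map[btcutil.Address]btcutil.Amount{
		destAddr:   send,
		changeAddr: inputTotal - send - fee, // 0.6999 BTC change
	}
	fmt.Println(amounts) // pass this map to CreateRawTransaction
}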

Related

JMeter delay between two iterations in a Loop Controller

I am trying to achieve the below use case for load testing via JMeter:
1. Search Product
2. Add to cart
3. Do payment
1 user with uid = 1 will perform the above-mentioned 3 steps every 5 min for 1 hour.
total requests per user per hour = 12 sets (60 min / 5 min) * 3 requests per set = 36
total users (threads) = 1000
total requests per hour = 1000 * 36 = 36,000
Let's consider the 3 requests as a single set.
I am looking for the below things:
after every 5 min, 1 set should be executed
the delay between two sets should be 5 min
Can anyone please help me in achieving the above scenario?
I have tried the below JMeter setups:
thread group (threads = 1000, ramp-up = 100 sec, loop count = 1)
loop controller (the above 3 requests with loop count = 12)
constant timer = 300000 milliseconds
thread group (threads = 1000, ramp-up = 100 sec, loop count = 1)
loop controller (the above 3 requests with loop count = 12)
constant throughput timer = 5 rpm
thread group (threads = 1000, ramp-up = 100 sec, loop count = infinite, duration = 3600 sec)
the above 3 requests inside the thread group
constant throughput timer = 5 rpm
I have also tried a Random Order Controller.
I am unable to simulate the above scenario. What I get is: the first request is executed 1000 times, then a delay, then the second request is executed 1000 times, then a delay, then the third request is executed 1000 times.
Constant Timer adds a delay before each Sampler in its scope.
If you want to introduce a delay between two iterations, add a Flow Control Action sampler and define the desired delay there, as in the outline below.
Additionally, if you want all the users to finish the action together, add a Synchronizing Timer and set its "number of users to group by" equal to the number of threads in the Thread Group.
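A hedged sketch of a plan that matches the scenario (element names as in recent JMeter versions; the pause value is the 5-minute gap):
Thread Group (threads = 1000, ramp-up = 100 sec, loop count = 12)
  Search Product
  Add to cart
  Do payment
  Flow Control Action (Pause, duration = 300000 ms)  <- waits 5 min after each set of 3
Each thread then runs one set, pauses 5 minutes, and repeats 12 times, which is roughly one hour per user.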
More information on JMeter Timers concept: A Comprehensive Guide to Using JMeter Timers

Optimizing Groovy Performance

I'm working on Groovy code performance optimization. I've used jvisualvm to connect to the running application and gather CPU samples. The samples say that org.codehaus.groovy.reflection.CachedMethod.invoke takes the most CPU time. I don't see any other application methods in the samples.
What is the right way to dig into CachedMethod.invoke and understand which code lines really impose performance penalties?
Thanks.
UPD:
I do use indy; it didn't help me.
I didn't try to introduce @CompileStatic since I want to find my bottlenecks before rewriting Groovy to Java.
My problem a bit similar to this thread: Call site caching faster than invokedynamic?
I have code that dynamically composes a Groovy script. The script template looks like this:
def evaluateExpression(Map context){
def user = context.user
%s
}
where %s is replaced with
user.attr1 == '1' || user.attr2 == '2' || user.attr3 == '3'
There is a set of replacements (20 in total) taken from the database.
The code gets the replacements from the DB, creates a Groovy script, and evaluates it.
I suppose the bottleneck is in the script execution. What is the right way to fix it?
So, I've tried various things:
groovy with indy: doesn't work.
groovy with indy and some code "optimization": doesn't work. BTW, I started to play around with try/catch and as a result I made my "hotspot" run 4 times faster. I'm not good at JVM internals, but the internet says try/catch can prevent optimizations; I took that as ground truth. I need to dig deeper to understand how it really works.
I gave up, turned off invokedynamic and rewrote my "hottest" code with @CompileStatic. It took about 3-4 hours and my code runs 100 times faster now.
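For illustration, a sketch of what such a rewrite can look like; the User class and its typed attributes are invented stand-ins for whatever context.user really is:
import groovy.transform.CompileStatic

@CompileStatic
class User {
    String attr1
    String attr2
    String attr3
}

@CompileStatic
class ExpressionEvaluator {
    // The template body from above, statically compiled: no
    // CachedMethod.invoke reflection left on the hot path.
    static boolean evaluate(User user) {
        user.attr1 == '1' || user.attr2 == '2' || user.attr3 == '3'
    }
}

assert ExpressionEvaluator.evaluate(new User(attr1: '1'))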
Here are the initial metrics with invokedynamic support:
count = 83043
mean rate = 395.52 calls/second
1-minute rate = 555.30 calls/second
5-minute rate = 217.78 calls/second
15-minute rate = 82.92 calls/second
min = 0.29 milliseconds
max = 12.98 milliseconds
mean = 1.59 milliseconds
stddev = 1.08 milliseconds
median = 1.39 milliseconds
75% <= 2.46 milliseconds
95% <= 3.14 milliseconds
98% <= 3.44 milliseconds
99% <= 3.76 milliseconds
99.9% <= 12.19 milliseconds
Here are the @CompileStatic metrics with indy turned off. BTW, there is no reason to use @CompileStatic if indy is turned on.
count = 139724
mean rate = 8950.43 calls/second
1-minute rate = 2011.54 calls/second
5-minute rate = 426.96 calls/second
15-minute rate = 143.76 calls/second
min = 0.02 milliseconds
max = 24.18 milliseconds
mean = 0.08 milliseconds
stddev = 0.72 milliseconds
median = 0.06 milliseconds
75% <= 0.08 milliseconds
95% <= 0.11 milliseconds
98% <= 0.15 milliseconds
99% <= 0.20 milliseconds
99.9% <= 1.27 milliseconds

CPLEX prints a lot to the terminal although the corresponding parameters are set

I am using CPLEX in C++.
After googling I found out which parameters need to be set to keep CPLEX from printing to the terminal, and I use them like this:
IloCplex cplex(model);
std::ofstream logfile("cplex.log");
cplex.setOut(logfile);
cplex.setWarning(logfile);
cplex.setError(logfile);
cplex.setParam(IloCplex::MIPInterval, 1000);//Controls the frequency of node logging when MIPDISPLAY is set higher than 1.
cplex.setParam(IloCplex::MIPDisplay, 0);//MIP node log display information-No display until optimal solution has been found
cplex.setParam(IloCplex::SimDisplay, 0);//No iteration messages until solution
cplex.setParam(IloCplex::BarDisplay, 0);//No progress information
cplex.setParam(IloCplex::NetDisplay, 0);//Network logging display indicator
if ( !cplex.solve() ) {
....
}
but CPLEX still prints things like:
Warning: Bound infeasibility column 'x11'.
Presolve time = 0.00 sec. (0.00 ticks)
Root node processing (before b&c):
Real time = 0.00 sec. (0.01 ticks)
Parallel b&c, 4 threads:
Real time = 0.00 sec. (0.00 ticks)
Sync time (average) = 0.00 sec.
Wait time (average) = 0.00 sec.
------------
Total (root+branch&cut) = 0.00 sec. (0.01 ticks)
Is there any way to avoid printing them?
Use the setOut method from the IloAlgorithm class (IloCplex inherits from IloAlgorithm). You can pass a null output stream as the parameter and prevent the messages from being logged to the screen.
This is what works in C++ according to the CPLEX parameters doc:
cplex.setOut(env.getNullStream());
cplex.setWarning(env.getNullStream());
cplex.setError(env.getNullStream());
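For context, the same calls wired into the snippet from the question; this assumes model is the IloModel the solver was constructed from, so getEnv() returns the environment that owns the null stream:
IloEnv env = model.getEnv();            // environment the model was built in
IloCplex cplex(model);
cplex.setOut(env.getNullStream());      // suppresses the node/simplex log
cplex.setWarning(env.getNullStream());  // suppresses e.g. "Bound infeasibility column 'x11'."
cplex.setError(env.getNullStream());
cplex.solve();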

Hypertable: loading data using mutators vs. LOAD DATA INFILE

I am starting a discussion which, I hope, will become the one place to discuss loading data using mutators vs. loading from a flat file via 'LOAD DATA INFILE'.
I have been baffled trying to get any substantial performance out of mutators (with batch size = 1000, 10000, 100K, et cetera).
My project involved loading close to 400 million rows of social media data into Hypertable to be used for real-time analytics. It took me close to 3 days to load just 1 million rows of data (code sample below). Each row is approximately 32 bytes. So, in order to avoid taking 2-3 weeks to load this much data, I prepared a flat file of rows and used the LOAD DATA INFILE method. The performance gain was amazing: with this method the loading rate was 368,336 cells/sec.
See below for an actual snapshot of the action:
hypertable> LOAD DATA INFILE "/data/tmp/users.dat" INTO TABLE users;
Loading 7,113,154,337 bytes of input data...
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Load complete.
Elapsed time: 508.07 s
Avg key size: 8.92 bytes
Total cells: 218976067
Throughput: 430998.80 cells/s
Resends: 2210404
hypertable> LOAD DATA INFILE "/data/tmp/graph.dat" INTO TABLE graph;
Loading 12,693,476,187 bytes of input data...
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Load complete.
Elapsed time: 1189.71 s
Avg key size: 17.48 bytes
Total cells: 437952134
Throughput: 368118.13 cells/s
Resends: 1483209
Why is the performance difference between the two methods so vast? What's the best way to enhance mutator performance? Sample mutator code is below:
my $batch_size = 1000000; # 1000 or 10000 makes no substantial difference
my $ignore_unknown_cfs = 2;
my $ht = new Hypertable::ThriftClient($master, $port);
my $ns = $ht->namespace_open($namespace);
my $users_mutator = $ht->mutator_open($ns, 'users', $ignore_unknown_cfs, 10);
my $graph_mutator = $ht->mutator_open($ns, 'graph', $ignore_unknown_cfs, 10);
my $keys = new Hypertable::ThriftGen::Key({ row => $row, column_family => $cf, column_qualifier => $cq });
my $cell = new Hypertable::ThriftGen::Cell({key => $keys, value => $val});
$ht->mutator_set_cell($users_mutator, $cell);
$ht->mutator_flush($users_mutator);
I would appreciate any input on this. I don't have a tremendous amount of Hypertable experience.
Thanks.
If it's taking three days to load one million rows, then you're probably calling flush() after every row insert, which is not the right thing to do. Before I describe how to fix that: your mutator_open() arguments aren't quite right. You don't need to specify ignore_unknown_cfs, and you should supply 0 for the flush_interval, something like this:
my $users_mutator = $ht->mutator_open($ns, 'users', 0, 0);
my $graph_mutator = $ht->mutator_open($ns, 'graph', 0, 0);
You should only call mutator_flush() if you would like to checkpoint how much of the input data has been consumed. A successful call to mutator_flush() means that all data that has been inserted on that mutator has durably made it into the database. If you're not checkpointing how much of the input data has been consumed, then there is no need to call mutator_flush(), since it will get flushed automatically when you close the mutator.
The next performance problem with your code that I see is that you're using mutator_set_cell(). You should use either mutator_set_cells() or mutator_set_cells_as_arrays() since each method call is a round-trip to the ThriftBroker, which is expensive. By using the mutator_set_cells_* methods, you amortize that round-trip over many cells. The mutator_set_cells_as_arrays() method can be more efficient for languages where object construction overhead is large in comparison to native datatypes (e.g. string). I'm not sure about Perl, but you might want to give that a try to see if it boosts performance.
Also, be sure to call mutator_close() when you're finished with the mutator.
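Putting those points together, a rough sketch; next_record() is a hypothetical iterator over your input rows, and the batch size of 10,000 is only illustrative:
my @batch;
while (my ($row, $cf, $cq, $val) = next_record()) {   # hypothetical input iterator
    my $key = new Hypertable::ThriftGen::Key({ row => $row, column_family => $cf, column_qualifier => $cq });
    push @batch, new Hypertable::ThriftGen::Cell({ key => $key, value => $val });
    if (@batch >= 10000) {                            # amortize the ThriftBroker round-trip
        $ht->mutator_set_cells($users_mutator, \@batch);
        @batch = ();
    }
}
$ht->mutator_set_cells($users_mutator, \@batch) if @batch;
$ht->mutator_close($users_mutator);                   # flushes any remaining cells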

JMeter Summary Report analysis

Can anyone please explain how to analyze JMeter's Summary Report?
Example:
Label : Login Action(sampler)
Sample# : 1
average: 104 // What does this mean actually?
min : 104 // What does this mean actually?
max : 104
stddev : 0 // What does this mean actually?
error% : 0
Throughput : 9.615384615 // What does this mean actually?
Kb/Sec : 91.74053486 // What does this mean actually?
Average Bytes : 9770 // What does this mean actually?
It is pretty straightforward:
Average, min and max are the response times for the request in milliseconds. The response time is measured from when the request is sent until the response is received. Since you have only one sample they are of course all equal.
stddev is a measure of the variation of the response times: http://en.wikipedia.org/wiki/Standard_deviation.
Throughput is the number of requests per second. With an average response time a little over 100 ms, the throughput is a little below 10.
Kb/Sec is the number of kilobytes transferred per second. It is Average Bytes (per request) * Throughput. I was not sure whether the average bytes covered only the response or both the request and the response; it turns out to be the response: its headers and body.
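As a sanity check with the numbers above: an average response time of 104 ms gives 1000 / 104 ≈ 9.6 requests/second, and 9770 bytes * 9.615384615 requests/second ≈ 93,942 bytes/second, which is 93,942 / 1024 ≈ 91.74 Kb/sec, exactly the reported value.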
I have just found a very nice and simple explanation here:
http://jmeterresults.blogspot.jp/2012/07/jmeterunderstanding-summary-report.html
