I am trying to traverse the graph nodes and execute some command for each node, like this:
neo4j-sh (0)$ trav -d 2 -c "ls $i"
But I always get the error:
Thread[...] already has a transaction bound
What is wrong? Is it a Neo4j bug?
It's a bug, and a sign that no one has used this command for at least 2 years :)
You can run the equivalent Cypher instead:
WITH {self} as n
MATCH (n)-[*2]-(m)
RETURN m;
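You can run this directly at the shell prompt after cd-ing to your start node; the shell binds the {self} parameter to the current node:
neo4j-sh (0)$ WITH {self} AS n MATCH (n)-[*2]-(m) RETURN m;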
I have an infinite loop which uses the AWS CLI to get the microservice names and their parameters (desired task count, number of running tasks, etc.) for an environment.
There are hundreds of microservices running in an environment. I have a requirement to compare the value of the AWS ECS "running tasks" metric for a particular microservice in the current loop iteration with its value from the previous iteration.
Say a microservice X has a running-task count of 5. As it is an infinite loop, after some time the loop comes around to microservice X again. Now, let's assume the running-task count is 4. I want to compare the value from the current iteration, which is 4, with the value from the previous iteration, which is 5.
If you are asking the generic question of how to keep a previous value around so it can be compared to the current value: just store it in a variable. You can use the following as a starting point:
#!/bin/bash

previousValue=0
while read v; do
    echo "Previous value=${previousValue}; Current value=${v}"
    previousValue=${v}   # remember this value for the next iteration
done
exit 0
If the above script is saved as testval.sh and you have an input file called test.in with the following values:
2
1
4
6
3
0
5
Then running
./testval.sh <test.in
will generate the following output:
Previous value=0; Current value=2
Previous value=2; Current value=1
Previous value=1; Current value=4
Previous value=4; Current value=6
Previous value=6; Current value=3
Previous value=3; Current value=0
Previous value=0; Current value=5
If the skeleton script works for you, feel free to modify it for whatever comparisons you need to do.
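To connect this to the ECS case: a minimal sketch, assuming a cluster named my-cluster and a service named X (both hypothetical), that polls the running-task count and compares it against the previous poll:

#!/bin/bash
cluster="my-cluster"   # hypothetical cluster name
service="X"            # hypothetical service name
previousValue=""
while true; do
    current=$(aws ecs describe-services --cluster "$cluster" --services "$service" \
              --query 'services[0].runningCount' --output text)
    # compare only once a previous value exists
    if [[ -n "$previousValue" && "$current" -ne "$previousValue" ]]; then
        echo "Running tasks for $service changed from $previousValue to $current"
    fi
    previousValue=$current
    sleep 60   # poll interval
done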
Hope this helps.
I don't know how your input looks exactly, but something like this might be useful for you:
The script
#!/bin/bash

declare -A app_stats            # associative array: app name -> last seen task count
while read app tasks; do
    # report only when the app has been seen before and its count changed
    if [[ -n ${app_stats[$app]} && ${app_stats[$app]} -ne $tasks ]]; then
        echo "Number of tasks for $app has changed from ${app_stats[$app]} to $tasks"
    fi
    app_stats[$app]=$tasks      # remember the current count either way
done < input.txt
The input
App1 2
App2 5
App3 6
App1 6
The output
Number of tasks for App1 has changed from 2 to 6
Regards!
I have a Fortran program (which I cannot modify) that requires several inputs from the user at the command line when it is run. The program takes quite a while to run, and I would like to retain use of the terminal by running it in the background; however, this is not possible due to its interactive nature.
Is there a way, using a bash script or some other method, that I can pass arguments to the program without directly interacting with it via the command line?
I'm not sure if this is possible; I tried searching for it but came up empty, though I'm not exactly sure what to search for.
Thank you!
P.S. I am working on a Unix system where I cannot install anything that is not already present.
You can pipe it in:
$ cat delme.f90
program delme
read(*, *) i, j, k
write(*, *) i, j, k
end program delme
$ echo "1 2 3" | ./delme
1 2 3
$ echo "45 46 47" > delme.input
$ ./delme < delme.input
45 46 47
$ ./delme << EOF
> 3 2 1
> EOF
3 2 1
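Once the program reads its input from a file, you can also detach the run from the terminal entirely; a minimal sketch (the log file name is my choice):

$ nohup ./delme < delme.input > delme.log 2>&1 &
$ tail -f delme.log    # watch the output while keeping your terminal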
I am running my executable with OpenMPI on a cluster that uses the SLURM resource manager. I would like to find a way to specify how many, and which, processes should be assigned to each of the nodes, where the number of processes might differ from node to node.
An example to clarify what I am looking for: suppose I want to run 7 processes on 3 nodes. Then I want to be able to say something like:
node 1 should run the process with rank 0, and nodes 2 and 3 should each run 3 of the remaining processes.
I do not care which physical node is node 1, as all the nodes on the cluster I am using are equal. Also, I do not know a priori which nodes SLURM will assign to me, so I cannot hard-code the nodes' names in a hostfile. An example I found in the OpenMPI documentation would define the hostfile like this for my case:
aa slots=1
bb slots=3
cc slots=3
but I have two problems with this approach:
I do not know a priori the names aa, bb, cc of the nodes.
Even if I knew them, the process on node aa does not necessarily have the right rank.
Thanks to Hristo Iliev's comment, I found the solution for the example stated in the question:
#!/bin/bash

HOSTFILE=./myHostfile
RANKFILE=./myRankfile

# Write the names of the nodes allocated by SLURM to a file
scontrol show hostname ${SLURM_NODELIST} > $HOSTFILE

# Number of processes
numProcs=7
# Number of nodes
numNodes=${SLURM_JOB_NUM_NODES}
# Counts the number of processes already assigned
count=0

while read p; do
    if [ $count -eq 0 ]; then
        # The first node gets exactly one process: rank 0
        echo "rank $count=$p slot=0-7" > $RANKFILE
        let count=$count+1
        let numNodes=$numNodes-1   # number of nodes that are still available
    else
        # Compute the number of processes that should be assigned to this node
        # by dividing the number of processes that still need to be assigned by
        # the number of nodes that are still available. (Integer division floors the result.)
        let "numProcsNode = ($numProcs - $count) / $numNodes"
        for i in $(seq 1 $numProcsNode); do
            echo "rank $count=$p slot=0-7" >> $RANKFILE
            let count=$count+1
        done
        let numNodes=$numNodes-1   # number of nodes that are still available
    fi
done < $HOSTFILE

mpirun --display-map -np $numProcs -rf $RANKFILE hostname
It is a bit ugly, though, and the "7" in "slot=0-7" probably should not be hard-coded.
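One way to avoid the hard-coded slot range: derive it from the CPU count SLURM reports for the node. A minimal sketch, assuming a homogeneous cluster where SLURM_CPUS_ON_NODE is set in the job environment:

# hypothetical replacement for the hard-coded "slot=0-7" inside the loop
slots="0-$(( SLURM_CPUS_ON_NODE - 1 ))"
echo "rank $count=$p slot=$slots" >> $RANKFILE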
I am working on a test in which I must find out the number of partitions of a table and check that it is correct. If I use show partitions TableName I get all the partitions by name, but I wish to get just the number of partitions, something along the lines of show count(partitions) TableName (which merely returns OK, by the way, so it's no good) and get, e.g., 12.
Is there any way to achieve this?
Using Hive CLI
$ hive --silent -e "show partitions <dbName>.<tableName>;" | wc -l
--silent enables silent mode, so only the query results are printed
-e tells hive to execute the quoted query string
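For example, to capture the count in a shell variable (the database and table names here are hypothetical):

$ count=$(hive --silent -e "show partitions mydb.mytable;" | wc -l)
$ echo "partition count: $count"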
You could use:
select count(distinct <partition key>) from <TableName>;
The command below lists all the partitions, and at the end it also shows the number of fetched rows; that row count is the number of partitions:
SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)];
You can use the WebHCat interface to get information like this. This has the benefit that you can run the command from anywhere that the server is accessible. The result is JSON - use a JSON parser of your choice to process the results.
In this example of piping the WebHCat results to Python, only the number 24 is returned, representing the number of partitions for this table. (The server name is the name node.)
curl -s 'http://*myservername*:50111/templeton/v1/ddl/database/*mydatabasename*/table/*mytablename*/partition?user.name=*myusername*' | python -c 'import sys, json; print len(json.load(sys.stdin)["partitions"])'
24
In Scala (for example, from a Spark shell) you can do the following:
sql("show partitions <table_name>").count()
I used the following:
beeline -silent --showHeader=false --outputformat=csv2 -e 'show partitions <dbname>.<tablename>' | wc -l
Use the following syntax:
show create table <table name>;
I'm using ab to do some load testing, and it's important that the supplied querystring (or POST) parameters change between requests.
I.e. I need to make requests to URLs like:
http://127.0.0.1:9080/meth?param=0
http://127.0.0.1:9080/meth?param=1
http://127.0.0.1:9080/meth?param=2
...
to properly exercise the application.
ab seems to only read the supplied POST data file once, at startup, so changing its content during the test run is not an option.
Any suggestions?
You're going to need to use a more full-featured benchmarking tool like jMeter for this.
I'll add my recommendation for jMeter... it works very well!
You could also write a script that generates a second script containing something like:
ab -n 1 -c 1 'http://yourserver.com/method?param=0' &
ab -n 1 -c 1 'http://yourserver.com/method?param=1' &
ab -n 1 -c 1 'http://yourserver.com/method?param=2' &
ab -n 1 -c 1 'http://yourserver.com/method?param=3' &
ab -n 1 -c 1 'http://yourserver.com/method?param=4' &
But that's only really useful if you're trying to simulate load and observe your server. The actual benchmarks will have to be collated if you want to check ab performance. At that point I'd just use jMeter. For my use, I just need to simulate load and the ab processes are light enough that running 100 like this is no problem.
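Rather than literally generating a second script, a loop gives the same effect; a minimal sketch (the URL and parameter range are hypothetical):

#!/bin/bash
# fire one backgrounded ab invocation per parameter value, then wait for all of them
for i in $(seq 0 99); do
    ab -n 1 -c 1 "http://yourserver.com/method?param=${i}" &
done
wait   # block until every background ab process has finished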
Here is a patched version of ab, plus the patch itself:
http://www.andboson.com/?p=1372
This version includes the patch from http://chrismiles.info/dev/testing/ab
and can also read multiple POST payloads, one per line.
Update: a sample request:
./ab -v1 -n2 -c1 -T'application/json' -ppostfile http://api.webhookinbox.com/i/HX6mC1WS/in/
postfile content:
{"data1":1, "data2":"4"}
{"data0":0, "x":"y"}
Update 2: an alternative:
https://github.com/andboson/ab-go