Formatting Neo4J-Shell cypher query results? - shell

Is there an option to export the results of a Neo4J-Shell cypher query in a comma-separated-value format, i.e. instead of
echo "START n=node(*) MATCH n-[r]->m RETURN n.value, type(r), m.value ORDER BY n.value, type(r), m.value;" | neo4j-shell -v -path neo4j-database/ > /tmp/output.csv
less /tmp/output.csv
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| n.value | type(r) | m.value |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa" | "http://www.w3.org/1999/02/22-rdf-syntax-ns#type" | "http://www.w3.org/2002/07/owl#Class" |
| "http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa" | "http://www.w3.org/2000/01/rdf-schema#label" | "Rosa" |
| "http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa" | "http://www.w3.org/2000/01/rdf-schema#subClassOf" | "http://www.co-ode.org/ontologies/pizza/pizza.owl#NamedPizza" |
...
I would like to get the following output:
less /tmp/output.csv
"http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", "http://www.w3.org/2002/07/owl#Class"
"http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa", "http://www.w3.org/2000/01/rdf-schema#label", "Rosa"
"http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa", "http://www.w3.org/2000/01/rdf-schema#subClassOf", "http://www.co-ode.org/ontologies/pizza/pizza.owl#NamedPizza"
...
like in MySQL, where the ASCII table is omitted when the client is fed by an echo command from the shell.

You can use neo4j-JDBC to run your Cypher queries via JDBC. With that in place you can use any JDBC tool that allows you to create CSV.
Or use the Groovy script from https://gist.github.com/5736410
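If you want to stay entirely in the shell, another option is to strip the ASCII table decoration from the neo4j-shell output with a small sed pipeline. This is just a sketch: it keeps only the table rows, drops the header, and turns the column separators into commas, assuming no returned value contains a literal |:
echo "START n=node(*) MATCH n-[r]->m RETURN n.value, type(r), m.value ORDER BY n.value;" \
  | neo4j-shell -path neo4j-database/ \
  | sed -n '/^|/p' \
  | sed '1d; s/^| *//; s/ *|$//; s/ *| */, /g' > /tmp/output.csv
Since neo4j-shell already double-quotes string values, the result matches the desired output above.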

Related

Can't save files using FTP on GeneXus

I am trying to save a file into a library on an IBM iSeries system using GxFtpPut on GeneXus 10 V3 with .NET, but when sending the file GeneXus tries to send it to a Windows directory instead of the library, although it works when using the ftp command from cmd.
I've already tried changing the route it uses, to no avail, and tried to find another way of sending the file through GeneXus.
For example, when using cmd I just run:
put C:\FILES\Filename.txt Library/Filename
and the file is sent into the library, but when doing this in GeneXus:
Call("GxFtpPut", &FileDirectory, 'Library/' + &FileName, 'B')
it does not work and tries to find a directory with that name in the Windows file system of the server.
I just want to be able to send it to the server library without issue.
IBM i has two distinct name formats depending on the file system you are trying to use. NAMEFMT 0 is the library/filename format and is likely unknown to PC FTP clients. NAMEFMT 1 is the typical hierarchical directory path used by non-IBM i computers; it also works on IBM i if you want to put a file anywhere in the IFS (Integrated File System).
Fun fact: the native library file system is also accessible from the IFS, but to address it you need a format that might be a little unfamiliar: /QSYS.lib/library.lib/filename.file/membername.mbr. You may be able to drop the member name.
To change name format, you can issue the SITE sub-command on your remote host like this:
QUOTE SITE NAMEFMT 0 -- This sets name format 0 (library/filename)
QUOTE SITE NAMEFMT 1 -- This sets name format 1 (directory path)
I did some testing with a plain Windows FTP client. The test file on the PC was a text file created in Notepad++. It turns out that we start in NAMEFMT 0 unless it is changed. It looks like GeneXus only supports a limited set of commands, so here is the limited FTP script that works:
ascii
put test.txt mylib/testpf
I can now pull up testpf in the green-screen utilities and read it. I can also read testpf in my GUI SQL client. The ASCII text has been converted properly to EBCDIC.
|TESTPF |
|--------------------------------------------------------------------------------|
| |
|// ------------------------------------ |
|// Sweep |
|// |
|// Performs the sweep logic |
|// ------------------------------------ |
|dcl-proc Sweep; |
| |
| |
| exec sql |
| update atty a |
| set ymglsb = (select ymglsb from glaty |
| where atty = a.atty) |
| where atty in (select atty from glaty where atty = a.atty); |
|// where ymglsb in (select ymglsb from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('02: ' + message); |
| endif; |
| |
| exec sql |
| update atty a |
| set ymglsb = '000' |
| where not exists (select * from glaty where atty = a.atty); |
| if %subst(sqlstate: 1: 2) < '00' or |
| %subst(sqlstate: 1: 2) > '02'; |
| exec sql get diagnostics condition 1 |
| :message = message_text; |
| SendSqlMsg('03: ' + message); |
| endif; |
| |
|end-proc; |
However, if I try to transfer in binary mode, the resulting data in the file looks like this:
|TESTPF |
|--------------------------------------------------------------------------------|
|ëÏÁÁø&ÁÊÃ?Ê_ËÈÇÁËÏÁÁø% |
|?ÅÑÄÀÄ%øÊ?ÄëÏÁÁøÁÌÁÄËÉ% |
|ÍøÀ/ÈÁ/ÈÈ`/ËÁÈ`_Å%ËÂËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È` |
|ÏÇÁÊÁ/ÈÈ`//ÈÈ`ÏÇÁÊÁ/ÈÈ`Ñ>ËÁ%ÁÄÈ/ÈÈ`ÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ |
|`//ÈÈ`ÏÇÁÊÁ`_Å%ËÂÑ>ËÁ%ÁÄÈ`_Å%ËÂÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`//ÈÈ |
|`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁÌÁÄËÉ%ÍøÀ/ÈÁ/ÈÈ`/ |
|ËÁÈ`_Å%ËÂÏÇÁÊÁ>?ÈÁÌÑËÈËËÁ%ÁÄÈÃÊ?_Å%/È`ÏÇÁÊÁ/ÈÈ`// |
|ÈÈ`ÑöËÍÂËÈËÉ%ËÈ/ÈÁ?ʶËÍÂËÈËÉ%ËÈ/ÈÁ |
|ÁÌÁÄËÉ%ÅÁÈÀÑ/Å>?ËÈÑÄËÄ?>ÀÑÈÑ?>_ÁËË/ÅÁ_ÁËË/ÅÁ¬ÈÁÌÈ |
|ëÁ>ÀëÉ%(ËÅ_ÁËË/ÅÁÁ>ÀÑÃÁ>ÀøÊ?Ä |
This has not been converted, because binary mode tells the IBM i FTP server not to translate the data to EBCDIC.
So use ASCII mode and the library/filename format. The target file does not need to pre-exist.
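For reference, a complete scripted session with the stock Windows ftp.exe might look like the sketch below; MYHOST, MYUSER, and MYPASS are placeholders, and the SITE NAMEFMT 0 line is only needed if the session does not already start in name format 0:
rem build the ftp script; MYHOST/MYUSER/MYPASS are placeholders
echo open MYHOST> ftpscript.txt
echo user MYUSER MYPASS>> ftpscript.txt
echo quote site namefmt 0>> ftpscript.txt
echo ascii>> ftpscript.txt
echo put test.txt mylib/testpf>> ftpscript.txt
echo quit>> ftpscript.txt
ftp -n -s:ftpscript.txt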

Fetch particular column value from rows with specified condition using shell script

I have sample output from a command:
+--------------------------------------+------------------+---------------------+-------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+-------------------------------------+
| 04584e8a-c210-430b-8028-79dbf741797c | | 99.99.99.91 | |
| 12d2257c-c02b-4295-b910-2069f583bee5 | 20.0.0.92 | 99.99.99.92 | 37ebfa4c-c0f9-459a-a63b-fb2e84ab7f92 |
| 98c5a929-e125-411d-8a18-89877d3c932b | | 99.99.99.93 | |
| f55e54fb-e50a-4800-9a6e-1d75004a2541 | 20.0.0.94 | 99.99.99.94 | fe996e76-ffdb-4687-91a0-9b4df2631b4e |
+--------------------------------------+------------------+---------------------+-------------------------------------+
Now I want to fetch all the "floating_ip_address" values for which the "port_id" and "fixed_ip_address" fields are blank/empty (in the above sample, 99.99.99.91 and 99.99.99.93).
How can I do it with shell scripting?
You can use sed (the \s/\S shorthands require GNU sed):
fl_ips=($(sed -nE 's/^\|[^|]*\|\s*\|\s*(\S+)\s*\|\s*\|$/\1/p' inputfile))
Here inputfile contains the table from the question. The pattern matches only rows whose fixed_ip_address and port_id columns are blank, and captures the floating_ip_address between them. The array fl_ips contains the output of sed:
>echo ${#fl_ips[@]}
2 # Array has two elements
>echo ${fl_ips[0]}
99.99.99.91
>echo ${fl_ips[1]}
99.99.99.93
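If you prefer something that states both conditions explicitly, an awk alternative (a sketch, assuming the exact table layout shown above) would be:
# with FS='|' each data row has 6 fields: an empty first and last field
# plus the 4 columns; NF == 6 also skips the +----+ border lines
awk -F'|' 'NF == 6 && $3 ~ /^ *$/ && $5 ~ /^ *$/ {
    gsub(/ /, "", $4); if ($4 != "") print $4
}' inputfile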

How do we get the descriptions of 1000 tables using Hive?

I have 1000 tables and need to run describe <table name>; on each one. Instead of running them one by one, can you please give me one command to fetch "N" tables in a single shot?
You can write a shell script and call it with a parameter. For example, the following script receives a schema, prepares the list of tables in that schema, calls the DESCRIBE EXTENDED command on each, extracts the location, and prints the table location for the first 1000 tables in the schema ordered by name. You can modify it and use it as a single command:
#!/bin/bash
# Create the table list for a schema (script parameter)
HIVE_SCHEMA=$1
echo "Processing Hive schema $HIVE_SCHEMA..."
tablelist=tables_$HIVE_SCHEMA
hive -e "set hive.cli.print.header=false; use $HIVE_SCHEMA; show tables;" 1> "$tablelist"

# Number of tables to process
tableNum_limit=1000

# For each table do:
for table in $(sort "$tablelist" | head -n "$tableNum_limit")
do
  echo "Processing table $table ..."
  # Call DESCRIBE EXTENDED
  out=$(hive -S -e "use $HIVE_SCHEMA; DESCRIBE EXTENDED $table")
  # Extract the location, for example
  table_location=$(echo "$out" | egrep -o 'location:[^,]+' | sed 's/location://')
  echo "Table location: $table_location"
  # Do something else here
done
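Hypothetical usage, assuming the script above is saved as describe_tables.sh:
chmod +x describe_tables.sh
./describe_tables.sh my_schema    # the schema name is the only parameter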
Query the metastore
Demo
Hive
create database my_db_1;
create database my_db_2;
create database my_db_3;
create table my_db_1.my_tbl_1 (i int);
create table my_db_2.my_tbl_2 (c1 string,c2 date,c3 decimal(12,2));
create table my_db_3.my_tbl_3 (x array<int>,y struct<i:int,j:int,k:int>);
MySQL (Metastore)
use metastore;

select d.name as db_name
      ,t.tbl_name
      ,c.integer_idx + 1 as col_position
      ,c.column_name
      ,c.type_name
from DBS as d
join TBLS as t on t.db_id = d.db_id
join SDS as s on s.sd_id = t.sd_id
join COLUMNS_V2 as c on c.cd_id = s.cd_id
where d.name like 'my\_db\_%'
order by d.name
        ,t.tbl_name
        ,c.integer_idx;
+---------+----------+--------------+-------------+---------------------------+
| db_name | tbl_name | col_position | column_name | type_name |
+---------+----------+--------------+-------------+---------------------------+
| my_db_1 | my_tbl_1 | 1 | i | int |
| my_db_2 | my_tbl_2 | 1 | c1 | string |
| my_db_2 | my_tbl_2 | 2 | c2 | date |
| my_db_2 | my_tbl_2 | 3 | c3 | decimal(12,2) |
| my_db_3 | my_tbl_3 | 1 | x | array<int> |
| my_db_3 | my_tbl_3 | 2 | y | struct<i:int,j:int,k:int> |
+---------+----------+--------------+-------------+---------------------------+
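To run that query non-interactively and capture the result from the shell, the mysql client's batch mode (-B) produces tab-separated output; a sketch, assuming a MySQL-backed metastore and placeholder host/user names:
mysql -h metastore-host -u hiveuser -p -D metastore -B \
  -e "select d.name, t.tbl_name, c.column_name, c.type_name
      from DBS d
      join TBLS t on t.db_id = d.db_id
      join SDS s on s.sd_id = t.sd_id
      join COLUMNS_V2 c on c.cd_id = s.cd_id
      where d.name like 'my\_db\_%'
      order by d.name, t.tbl_name, c.integer_idx;" > columns.tsv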

In Robot Framework, is it possible to run test cases in a for loop?

So my issue might be of a syntactic nature, maybe not, but I am clueless on how to proceed. I am writing a test case in Robot Framework, and my end goal is to be able to run multiple tests back to back in a loop.
In the case below, the Log To Console call works fine and outputs the different values passed as parameters. The next call, "Query Database And Analyse Data", works as well.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
But then, when I try to attach documentation and tags to "Query Database And Analyse Data" as a test case, I get the error: Keyword Name cannot be Empty, which leads me to think that when the file gets to the [Documentation] tag, it doesn't understand that it is part of a test case. This is usually how I write test cases.
Please note that the indentation below tries to match the inside of the loop.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | | [Documentation] | Query DB.
| | | | [Tags] | query | voltagevariation
| | | Duplicates Test
| | | | [Documentation] | Packets should be unique.
| | | | [Tags] | packet_duplicates | system
| | | | Duplicates
| | | Chroma Output ON
| | | | [Documentation] | Setting output terminal status to ON
| | | | [Tags] | set_output_on | voltagevariation
| | | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
Now, is this a syntax problem, an indentation issue, or is it just plain impossible to do what I'm trying to do? If you have written similar cases in a different manner, please let me know!
Any help or input would be highly appreciated!
You are trying to use Keywords as Test Cases. This approach is not supported by Robot Framework.
What you could do is make one Test Case with a lot of Keywords:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | Duplicates
| | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
*** Keywords ***
| Query Database And Analyse Data
| | Do something
| | Do something else
...
You can't really fit [Tags] anywhere useful. You can, however, produce meaningful failure messages (substituting for the [Documentation]) if, instead of calling a keyword directly, you wrap it in Run Keyword And Return Status.
Furthermore, have a look at data-driven tests to get rid of the :FOR loop completely.

Neo4j - export cypher query file results to .csv or .txt file via the Neo4jShell

I have a .cql query file that I want to run from the Neo4jShell (Windows) with the command:
Neo4j> Neo4jShell -file query.cql
The query returns some rows of data. How can I write that query output into a .csv or .txt file from the shell?
Also, I am using the Windows command prompt so take that into consideration with any solutions. Thanks!
UPDATE 1:
The command suggested by Luane essentially works:
Neo4j> Neo4jShell -file query.cql > out.csv
The only issue is that the output isn't comma separated:
+--------------------------+
| column 1 | column 2 |
+--------------------------+
| "C1611640" | "C1265875" |
| "C1579268" | "C1265875" |
| "C1570906" | "C1265875" |
| "C1522577" | "C1265875" |
| "C1519033" | "C1265875" |
| "C1515119" | "C1265875" |
| . . |
| . . |
| . . |
| "C1533658" | "C1265875" |
| "C1532338" | "C1265875" |
| "C1527144" | "C1265875" |
+--------------------------+
2000 rows
219 ms
Assuming your query returns data in the format you require, you can just send the output to any file. This works on macOS, but I don't see why it should not work on Windows:
> neo4j-shell -file query.cql > out.txt
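If you need actual comma separation rather than the ASCII table, the same post-processing idea from the first question applies; a sketch, assuming sed is available (e.g. via Git Bash or Cygwin on Windows):
neo4j-shell -file query.cql \
  | sed -n '/^|/p' \
  | sed '1d; s/^| *//; s/ *|$//; s/ *| */, /g' > out.csv
The first sed keeps only the | table rows (dropping the +----+ borders and the row-count footer), and the second drops the header row and converts the remaining column separators into commas.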
