I'm trying to run a "SELECT" query in a bash script to display data from ClickHouse. It works, but it doesn't display the field name; if I run the same query directly from clickhouse-client, it does display the column header.
Example in clickhouse-client:
ubuntu :) SELECT name FROM persons
Output:
┌─name────┐
│ George  │
│ Michael │
│ Robert  │
└─────────┘
but if I do the same in a shell script, it is displayed like this:
DBQuery="SELECT name FROM persons"
clickhouse-client --query="${DBQuery}"
Output:
George
Michael
Robert
Do you know how I can make the output in bash look like the table format?
I was able to do it by adding the flag --format="Pretty".
You can find all the formats at: https://clickhouse.com/docs/en/interfaces/formats/#formats
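For reference, a minimal sketch of the command from the question with that flag added:
DBQuery="SELECT name FROM persons"
# --format="Pretty" makes clickhouse-client draw the same bordered table, header included
clickhouse-client --query="${DBQuery}" --format="Pretty"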
So I have a folder structure that I want to make a tree of (I'm currently using the tree command). The current output looks like this:
C:.
└───example
    ├───example2
    └───folder with link
        │   link to example 2.lnk
        │
        └───other folder
Would it be possible to show the destination of the link?
Example:
C:.
└───example
    ├───example2
    └───folder with link
        │   link to example 2.lnk -> example2
        │
        └───other folder
It doesn't have to look exactly like that; I just want to see the link destination.
I tried to find something on the internet, but the only thing I found was a Linux solution that looked like this:
tree -l
.
├── aaa
│   └── bbb
│       └── ccc
└── slink -> /home/kaa/test/aaa/bbb
Sadly, neither -l nor /l exists in the Windows tree command.
If I view tree output in the terminal with less, using this function:
function tre() {
    tree -aC -I '.git|node_modules|bower_components' --dirsfirst "$@" | less -FRNX;
}
it scrolls only one line per keypress.
I need a shortcut or command to reach the end of the file.
If I press "G", the output comes up with "...skipping...":
19 │ │ │ └── someotherfile.db
20 │ │ ├── static
...skipping...
62 │ │ ├── user
63 │ │ │ ├── admin.py
How do I get to the end of the file with all lines loaded, without the "...skipping..."?
The issue was with this:
less -FRNX;
The last flag (X) forced the output to be rendered line by line. So the solution was not to use it:
less -FRN;
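So the function from the question becomes:
function tre() {
    # same as before, but without the X flag, so "G" jumps straight to the end
    tree -aC -I '.git|node_modules|bower_components' --dirsfirst "$@" | less -FRN;
}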
(Why I use less for tree output.)
A screenshot comparison shows the difference between the default tree output and the same output piped through less: same folder, but through less the output has colors, line numbers, and directories listed first.
I'm having issues while trying to insert data from a CSV file into ClickHouse using curl; the very first value is picking up extra characters and looks like this:
┌─name────┬─lastname─┐
│ &'Mark' │ Olson    │
│ Joe     │ Robins   │
└─────────┴──────────┘
My CSV file is fine; it looks like this:
'Mark','Olson'
'Joe','Robins'
As you can see, the table stores the first value of the first record as &'Mark'.
This is my code in bash:
query="INSERT INTO Myschema.persons FORMAT CSV"
cat ${csv} | curl -X POST -d "$query" $user:$password@localhost:8123 --data-binary @-
Do you know what the problem is?
Thanks
I think you should use the following format, where the query is part of the URL:
cat *.csv | curl 'http://localhost:8123/?query=INSERT%20INTO%20Myschema.persons%20FORMAT%20CSV' --data-binary @-
I am not sure why your curl is not working, but my best guess is that ClickHouse's parsing rules cannot consume your format: ${query} and ${csv}, both being POST parameters, get joined by '&' in the final HTTP request, and ClickHouse does not handle that case when parsing.
Quotes from the ClickHouse documentation:
You can send the query itself either in the POST body, or in the URL
parameter.
and
The POST method of transmitting data is necessary for INSERT queries.
In this case, you can write the beginning of the query in the URL
parameter, and use POST to pass the data to insert. The data to insert
could be, for example, a tab-separated dump from MySQL. In this way,
the INSERT query replaces LOAD DATA LOCAL INFILE from MySQL.
Check here for more details and examples: https://clickhouse.yandex/docs/en/interfaces/http_interface/
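Putting that together with the credentials from the original script (a sketch only; $user, $password and $csv are the variables from the question):
# send the INSERT as a URL parameter and only the CSV bytes in the POST body
cat "${csv}" | curl "http://${user}:${password}@localhost:8123/?query=INSERT%20INTO%20Myschema.persons%20FORMAT%20CSV" --data-binary @-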
I have made a simple cron job by typing the following command:
crontab -e
then, in the vi window that opened, I typed:
* * * * * * echo 'leon trozky' >> /Users/whitetiger/Desktop/foo.txt 2>&1
The file foo.txt indeed gets created, but its content is:
/bin/sh: Applications: command not found
I'm guessing this has to do with cron's PATH value. Is there any way to set the PATH in the crontab file, so that when I transfer it to another Mac I won't have to set the PATH manually? Is this even a PATH problem?
I think you have one too many *'s there. And yes, you can set the PATH variable in cron, in a couple of ways. But your problem is the extra *.
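For example (a sketch: crontab accepts plain VAR=value assignments above the job lines):
SHELL=/bin/sh
PATH=/usr/local/bin:/usr/bin:/bin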
Yeah, your syntax has one * more than it should. Just adding more info to Red Cricket's answer: the crontab syntax should be
* * * * * command to execute
│ │ │ │ │
│ │ │ │ └── day of week (0 - 6) (0 to 6 are Sunday to Saturday; 7 is also Sunday, the same as 0)
│ │ │ └──── month (1 - 12)
│ │ └────── day of month (1 - 31)
│ └──────── hour (0 - 23)
└────────── minute (0 - 59)
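So the entry from the question, with the extra * removed, would be:
* * * * * echo 'leon trozky' >> /Users/whitetiger/Desktop/foo.txt 2>&1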
Given a partitioned filesystem structure like the following:
logs
└── log_type
    └── 2013
        ├── 07
        │   ├── 28
        │   │   ├── host1
        │   │   │   └── log_file_1.csv
        │   │   └── host2
        │   │       ├── log_file_1.csv
        │   │       └── log_file_2.csv
        │   └── 29
        │       ├── host1
        │       │   └── log_file_1.csv
        │       └── host2
        │           └── log_file_1.csv
        └── 08
I've been trying to create an external table in Impala:
create external table log_type (
field1 string,
field2 string,
...
)
row format delimited fields terminated by '|' location '/logs/log_type/2013/08';
I wish Impala would recurse into the subdirectories and load all the CSV files, but no cigar.
No errors are thrown but no data is loaded into the table.
Different globs like /logs/log_type/2013/08/*/* or /logs/log_type/2013/08/*/*/* did not work either.
Is there a way to do this? Or should I restructure the fs - any advice on that?
In case you are still searching for an answer:
You need to register each individual partition manually.
See here for details: Registering External Table.
Your schema for the table needs to be adjusted:
create external table log_type (
field1 string,
field2 string,
...)
partitioned by (year int, month int, day int, host string)
row format delimited fields terminated by '|';
After you have changed your schema to include year, month, day, and host, you have to add each partition to the table individually.
Something like this:
ALTER TABLE log_type ADD PARTITION (year=2013, month=07, day=28, host="host1")
LOCATION '/logs/log_type/2013/07/28/host1';
Afterwards you need to refresh the table metadata in Impala:
invalidate metadata log_type;
refresh log_type;
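If there are many partitions, a hypothetical shell sketch along these lines could generate the statements instead of typing each one by hand (it assumes the /logs/log_type/YYYY/MM/DD/host layout from the question and that impala-shell is on the PATH; treat it as untested):
# list the host-level directories and turn each into an ADD PARTITION statement
for dir in $(hdfs dfs -ls -R /logs/log_type | awk '{print $NF}' | grep -E '/[0-9]{4}/[0-9]{2}/[0-9]{2}/[^/]+$'); do
    # split /logs/log_type/2013/07/28/host1 into its partition components
    IFS=/ read -r _ _ _ year month day host <<< "$dir"
    impala-shell -q "ALTER TABLE log_type ADD PARTITION (year=$year, month=$month, day=$day, host='$host') LOCATION '$dir'"
done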
Another way to do this might be to use the LOAD DATA statement in Impala. If your data is in a SequenceFile or another less Impala-friendly format (Impala file formats), you can create your external table like Joey does above, but instead of ALTER TABLE you can do something like
LOAD DATA INPATH '/logs/log_type/2013/07/28/host1/log_file_1.csv' INTO TABLE log_type PARTITION (year=2013, month=07, day=28, host="host1");
With newer versions of Impala you can use the
ALTER TABLE name RECOVER PARTITIONS
command. More info
What you have to be careful about is that the partitioning fields have to be lowercase, as the directory structure is case sensitive but Impala queries are not.
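Applied to the table from the question, that would be something like the following (a sketch, run via impala-shell; note that, as far as I know, automatic partition discovery expects directories in the key=value naming convention, e.g. .../year=2013/month=7/day=28/host=host1):
impala-shell -q "ALTER TABLE log_type RECOVER PARTITIONS"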