I'm trying to export this kind of data to PostgreSQL:
"WIFI:S:FIBRA-3;T:WPA;P:YOdfdgg4677;;";"2021-05-18 14:31:34"
"'":.56#!:&7:&":8";"2021-05-19 15:56:22"
but the first field is not recognized correctly, I think because of the double quotes.
The command I'm using is:
sqoop export \
--connect $DB_JDBC_URL_MAIN \
--username=$DB_USER \
--password="$DB_PASSWORD" \
--table "$DB_SCHEMA.$DB_TABLE" \
--export-dir $EXPORT_DIR \
--input-lines-terminated-by '\n' \
--input-fields-terminated-by ';' \
--input-null-string 'N/A' \
--optionally-enclosed-by '\"' \
--escaped-by \\
I hope you can help me.
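For reference, Sqoop's export-side parsing options carry the --input- prefix, while --optionally-enclosed-by and --escaped-by belong to the output-formatting group used on import. A sketch of the command with the input-side equivalents (same environment variables assumed, untested):
# use --input-* options so sqoop export parses the quoted input files
sqoop export \
    --connect $DB_JDBC_URL_MAIN \
    --username=$DB_USER \
    --password="$DB_PASSWORD" \
    --table "$DB_SCHEMA.$DB_TABLE" \
    --export-dir $EXPORT_DIR \
    --input-lines-terminated-by '\n' \
    --input-fields-terminated-by ';' \
    --input-null-string 'N/A' \
    --input-optionally-enclosed-by '\"' \
    --input-escaped-by '\\'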
I am trying to capture the output in a file using
cat <<EOF> /var/log/awsmetadata.log
timestamp= $TIME, \
        region= $REGION, \
        instanceIp= $INSTANCE_IP, \
        availabilityZone= $INSTANCE_AZ, \
        instanceType= $INSTANCE_TYPE, \
EOF
The output is created in this format:
cat /var/log/awsmetadata.log
timestamp= 2020-11-04 18:51:17,         region= us-west-2,         instanceIp= 1.2.3.4,         availabilityZone= us-west-2a,
How can I eliminate the wide spaces between the fields in the output?
If you don't want redundant whitespace, simply do not add it:
$ cat <<EOF> /var/log/awsmetadata.log
> timestamp= $TIME, \
> region= $REGION, \
> instanceIp= $INSTANCE_IP, \
> availabilityZone= $INSTANCE_AZ, \
> instanceType= $INSTANCE_TYPE
> EOF
I often use sed or tr instead of cat for this sort of thing:
tr -s ' ' <<EOF > /var/log/awsmetadata.log
timestamp= $TIME, \
        region= $REGION, \
        instanceIp= $INSTANCE_IP, \
        availabilityZone= $INSTANCE_AZ, \
        instanceType= $INSTANCE_TYPE,
EOF
But it seems cleaner to not escape the newlines at all and do something like:
{ tr -d \\n <<-EOF; echo; } > /var/log/awsmetadata.log
timestamp= $TIME,
region= $REGION,
instanceIp= $INSTANCE_IP,
availabilityZone= $INSTANCE_AZ,
instanceType= $INSTANCE_TYPE,
EOF
(That solution uses the <<- form of the heredoc, which strips hard-tab indentation. It will not remove leading spaces.)
OTOH, it seems weird to be using a here-document when you just want to generate one line of output. Why not just use echo?
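For instance, a single printf with the same variables would produce the one-line record directly (a sketch):
# one format string, one line of output, no heredoc needed
printf 'timestamp= %s, region= %s, instanceIp= %s, availabilityZone= %s, instanceType= %s\n' \
    "$TIME" "$REGION" "$INSTANCE_IP" "$INSTANCE_AZ" "$INSTANCE_TYPE" \
    > /var/log/awsmetadata.log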
I have a template file like the one shown below, with a number of variables in it that I want to replace with values I peel off of a JSON doc. I'm able to do it with sed for the few simple ones, but I have problems doing it for <ARN> and others like that.
#test "Test <SCENARIO_NAME>--<EXPECTED_ACTION>" {
<SKIP_BOOLEAN>
testfile="data/<FILE_NAME>"
assert_file_exist $testfile
IBP_JSON=$(cat $testfile)
run aws iam simulate-custom-policy \
--resource-arns \
"<ARN>"
--action-names \
"<ACTION_NAMES>"
--context-entries \
"ContextKeyName='aws:PrincipalTag/Service', \
ContextKeyValues='svc1', \
ContextKeyType=string" \
"ContextKeyName='aws:PrincipalTag/Department', \
ContextKeyValues='shipping', \
ContextKeyType=string" \
<EXTRA_CONTEXT_KEYS>
--policy-input-list "${IBP_JSON}"
assert_success
<TEST_EXPRESSION>
}
I want the <ARN> placeholder to be replaced with the following text:
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*" \
How can I do that replacement while also preserving the formatting (the \ and \r at line ends)?
The easiest is to use bash itself:
original=$(cat file.txt)
read -r -d '' replacement <<'EOF'
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*" \
EOF
placeholder='"<ARN>"'
modified=${original/$placeholder/$replacement}
echo "$modified"
Look for ${parameter/pattern/string} in man bash.
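A minimal illustration of that expansion, with throwaway strings:
$ s='one <ARN> three'
$ echo "${s/<ARN>/two}"
one two three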
If input.txt is the input file and replace.txt contains the replacement text:
$ cat input.txt
run aws iam simulate-custom-policy \
--resource-arns \
"<ARN>"
--action-names \
"<ACTION_NAMES>"
$ cat replace.txt
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \\\
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \\\
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \\\
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \\\
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*"
then you can use sed with # delimiters to make the replacement:
$ sed "s#\"<ARN>\"#$(< replace.txt)#g" input.txt
run aws iam simulate-custom-policy \
--resource-arns \
"arn:aws:ecs:*:588068252125:cluster/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:container-instance/${aws:PrincipalTag/Service}-*" \
"arn:aws:ecs:*:588068252125:task-definition/${aws:PrincipalTag/Service}-*:*" \
"arn:aws:ecs:*:588068252125:service/${aws:PrincipalTag/Service}-*"
--action-names \
"<ACTION_NAMES>"
Here $(< replace.txt) is equivalent to $(cat replace.txt).
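The bash-only $(< file) form just avoids spawning a cat process; a quick sanity check (a sketch):
$ [ "$(< replace.txt)" = "$(cat replace.txt)" ] && echo identical
identical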
Since I have special characters in one of the fields, I wanted to use a low value as the delimiter. Hive works fine with the delimiter (\0), but sqoop fails with a NoSuchElementException. It looks like it is not detecting the delimiter as \0.
This is how my Hive and sqoop scripts look. Any help, please?
CREATE TABLE SCHEMA.test
(
name CHAR(20),
id int,
dte_report date
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\0'
LOCATION '/user/$USER/test';
sqoop-export \
-Dmapred.job.name="TEST" \
-Dorg.apache.sqoop.export.text.dump_data_on_error=true \
--options-file ${OPTION_FILE_LOCATION}/conn_mysql \
--export-dir /user/$USER/test \
--input-fields-terminated-by '\0' \
--input-lines-terminated-by '\n' \
--input-null-string '\\N' \
--input-null-non-string '\\N' \
--table MYSQL_TEST \
--validate \
--outdir /export/home/$USER/javalib
In the vi editor the delimiter looks like '^@', and with od -c the delimiter is \0.
Setting the character set to UTF-8 in the MySQL connection string can resolve this issue:
mysql.url=jdbc:mysql://localhost:3306/nbs?useJvmCharsetConverters=false&useDynamicCharsetInfo=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&useEncoding=true
You should use \000 as the delimiter; it will generate that character as the delimiter.
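That is, the field-terminator line in the sqoop-export call above would become something like:
# octal escape form for the NUL byte, instead of '\0'
--input-fields-terminated-by '\000' \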
NULL values are displayed as '\N' when the Hive external table is queried.
Below is the sqoop import script:
sqoop import -libjars /usr/lib/sqoop/lib/tdgssconfig.jar,/usr/lib/sqoop/lib/terajdbc4.jar -Dmapred.job.queue.name=xxxxxx \
--connect jdbc:teradata://xxx.xx.xxx.xx/DATABASE=$db,LOGMECH=LDAP --connection-manager org.apache.sqoop.teradata.TeradataConnManager \
--username $user --password $pwd --query "
select col1,col2,col3 from $db.xxx
where \$CONDITIONS" \
--null-string '\N' --null-non-string '\N' \
--fields-terminated-by '\t' --num-mappers 6 \
--split-by job_number \
--delete-target-dir \
--target-dir $hdfs_loc
Please advise what change should be made to the script so that NULLs are displayed as NULL when the external Hive table is queried.
Sathiyan, below are my findings after many trials:
1. If the null-string property is not included during sqoop import, then NULLs are stored as [blank for integer columns] and [blank for string columns] in HDFS.
2. If the Hive table on top of that HDFS data is queried, we see [NULL for integer columns] and [blank for string columns].
3. If the --null-string '\N' property is included during sqoop import, then NULLs are stored as ['\N' for both integer and string columns].
4. If the Hive table is then queried, we see [NULL for both integer and string columns], not '\N'.
In your sqoop script you mentioned --null-string '\N' --null-non-string '\N', which means:
--null-string '\N' = The string to be written for a null value for string columns
--null-non-string '\N' = The string to be written for a null value for non-string columns
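Note that when the goal is for Hive to read the value back as NULL, the Sqoop user guide escapes the backslash, because the value is also substituted into Sqoop's generated code; its import examples pass a doubled backslash so that a literal \N (Hive's default null marker) lands in the data. Applied to the script above, that line would read:
--null-string '\\N' --null-non-string '\\N' \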
If any value is NULL in the table and we sqoop that table, then Sqoop will import the NULL value as the string "null" in HDFS. That creates a problem for using a NULL condition in a Hive query.
For example, let's insert a NULL value into the MySQL table "cities":
mysql> insert into cities values(6,7,NULL);
By default, Sqoop will import the NULL value as the string "null" in HDFS.
Let's sqoop and see what happens:
sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username sqoop -P --table cities --hive-import --hive-overwrite --hive-table vikas.cities -m 1
http://deltafrog.com/how-to-handle-null-value-during-sqoop-import-export/
In the sqoop import command, remove the --null-string and --null-non-string '\N' options;
by default the system will assign null for both string and non-string values.
I have tried --null-string '\N', --null-string '', and other options, but I get blanks and various other issues.
I have a problem with zenity that I cannot work out. Could you guys help me?
I have a 7 line long tmp3 file:
AAA
BBB
...
FFF
GGG
I want to send this file through zenity so that it displays a checklist with the possibility to check every line I want, in any combination I want.
I previously wrote:
choice=$(cat tmp3 | zenity --list \
--column='#' \
--text "Select playlist from the list below" \
--title "Please select one or more playlists" \
--multiple \
--width=300 \
--height=300 \
--checklist \
--column "Select" \
--separator="/ ")
All this does is create one single line in zenity containing all 7 lines of tmp3. That's not what I want.
Currently I have this:
choice=$(zenity --list \
--column "Playlists" FALSE $(cat tmp3) \
--text "Select playlist from the list below" \
--title "Please select one or more playlists" \
--multiple \
--width=300 \
--height=300 \
--checklist \
--column "Select" \
--separator="/ ")
Here something really weird happens that I don't understand: only 4 out of 7 fields are created in zenity (AAA, CCC, EEE and GGG), but not the other ones. When I use set -x for debugging, I can see all 7 lines being passed to zenity... What is happening?
I tried another solution, listing the 7 subfolders in my current folder (which happen to have exactly the same names as the lines in tmp3). The same thing happens!
I wrote this:
choice=$(zenity --list \
--column "Playlists" FALSE $(ls -d -1 */) \
--text "Select playlist from the list below" \
--title "Please select one or more playlists" \
--multiple \
--width=300 \
--height=300 \
--checklist \
--column "Select" \
--separator="/ ")
The second solution seems easier, but my skills aren't very high, and I would like to understand the latter solution and why it behaves this way.
Thank you guys!
EDIT:
I have found this and tried to make it work my way, but with no success so far...
http://www.linuxquestions.org/questions/programming-9/reading-lines-to-an-array-and-generate-dynamic-zenity-list-881421/
The part FALSE $(cat tmp3) expands to
FALSE AAA
BBB
CCC
DDD
EEE
FFF
GGG
What you need is
FALSE AAA
FALSE BBB
FALSE CCC
FALSE DDD
FALSE EEE
FALSE FFF
FALSE GGG
One way to achieve this is --column "Playlists" $(sed s/^/FALSE\ / tmp3) \
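Dropped into the command from the question, that looks like this (a sketch; the unquoted $(...) relies on word splitting, so it breaks if a playlist name contains spaces):
choice=$(zenity --list \
    --column "Playlists" $(sed 's/^/FALSE /' tmp3) \
    --text "Select playlist from the list below" \
    --title "Please select one or more playlists" \
    --multiple \
    --width=300 \
    --height=300 \
    --checklist \
    --column "Select" \
    --separator="/ ")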
There's an interesting example in man zenity:
zenity \
--list \
--checklist \
--column "Buy" \
--column "Item" \
TRUE Apples \
TRUE Oranges \
FALSE Pears \
FALSE Toothpaste
You just need to turn on a neurone to adapt it a bit =)
EDIT:
If you have a list of undefined length, this example will be more interesting:
find . -name '*.h' |
zenity \
--list \
--title "Search Results" \
--text "Finding all header files.." \
--column "Files"
I know I'm kinda late, but I wanted just about the same thing and figured it out in the end.
My solution does a search (hiding errors), adds TRUE and a newline to each result (that was the key!), then sends the result to zenity:
CHECKED=`find /music/folder -name "*.mp3" -type f 2>/dev/null | \
awk '{print "TRUE\n"$0}' | \
zenity --list --checklist --separator='\n' --title="Select Results." \
--text="Finding all MP3 files..." --column="" --column="Files"`
In your situation, I guess this should be:
CHECKED=`cat tmp3 | awk '{print "TRUE\n"$0}' | zenity --list --checklist \
--separator='/ ' --title="Select Results." \
--text="Finding all MP3 files..." --column="" --column="Select"`
So it seems Zenity puts each newline-separated value into a cell, and fills the list row by row. This means you can manipulate the strings going into Zenity to add any number of columns.
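For example, feeding three newline-separated values per row gives a three-column checklist with a row number (a sketch):
awk '{print "FALSE\n" NR "\n" $0}' tmp3 |
zenity --list --checklist --column "" --column "#" --column "Playlists"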
In short, you have two options:
Option one: Input file; newlines separate the columns.
Instead of
cat tmp3 | zenity ... ...
do:
sed 's/^/.\n/' tmp3 | zenity ... ...
Option two: Inline command; the columns are read as pairs from the command args.
Instead of
cat tmp3 | zenity ... ...
do:
zenity ... ... `sed 's/^/. /' tmp3`
$ zenity --list --checklist --height 400 \
    --text "Select playlist from the list below" \
    --title "Please select one or more playlists" \
    --column "Playlists" --column "Select" \
    --separator="/ " \
    $(ls -d -1 */ | xargs -L1 echo FALSE)