I migrated from Flyway to Liquibase, and now I want to use the following command to export our data.
The diffExcludeObjects functionality works great with tables. In my example, it doesn't export the tables CONTENT_IMAGE and schema_version.
It should work with columns, too (http://www.liquibase.org/2015/01/liquibase-3-3-2-released.html), but it doesn't: my exported CSV still contains the column MAIL_ADDRESS. I haven't found a way to exclude specific columns from an export. My table with the column looks like this.
The Maven goal documentation (https://www.liquibase.org/documentation/maven/maven_generateChangeLog.html) also says that it should work.
Command
mvn org.liquibase:liquibase-maven-plugin:generateChangeLog \
-Dliquibase.propertyFile="src/main/resources/db/liquibase/connection-to-database.properties" \
-Dliquibase.dataDir="src/main/resources/db/liquibase/export" \
-Dliquibase.diffTypes="data" \
-Dliquibase.diffExcludeObjects="table:CONTENT_IMAGE, column:MAIL_ADDRESS, table:schema_version" \
-Dliquibase.promptOnNonLocalDatabase="false"
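For completeness, the property file referenced above only carries the connection settings; a minimal sketch (the driver, URL, and credentials below are placeholders, not the project's real values) would be:
# src/main/resources/db/liquibase/connection-to-database.properties (illustrative values only)
driver=org.postgresql.Driver
url=jdbc:postgresql://localhost:5432/mydb
username=dbuser
password=secret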
Am I misunderstanding this feature, or am I using it wrong? Any help would be appreciated.
Thank you.
Is there a way to tell the ZAP API scan (run via docker run -i owasp/zap2docker-stable zap-api-scan.py) which queries and/or mutations from my GraphQL schema to hit during the scan and which to exclude, or do I need to set up my schema file to only include what I want scanned?
My problem is that the schema I am trying to scan is massive. I only want to scan like 15 mutations out of about 200...
Something like:
docker run -i owasp/zap2docker-stable zap-api-scan.py \
-t https://mytarget.com -f graphql \
-schema schema-file.graphql \
--include-mutations file-with-list-of-mutations-to-include
The packaged scans are quite flexible, and do allow you to specify exactly which scan rules to run and which 'strength' to use for each rule.
However, there are limits to what you can easily achieve, so you might want to look at the Automation Framework, which is much more flexible.
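For the packaged scan itself, rule selection is usually driven by a rule-configuration file passed with -c; a rough sketch of that workflow (file names are placeholders, and the working directory must be mounted so the container can read and write them) is:
# Generate a template rule-configuration file (all rules set to WARN)
docker run -v $(pwd):/zap/wrk/:rw -i owasp/zap2docker-stable zap-api-scan.py \
    -t https://mytarget.com -f graphql \
    -g rules.conf
# Each line in rules.conf is roughly: <rule id>  <WARN|IGNORE|FAIL>  (<rule name>)
# Change WARN to IGNORE for rules you don't want applied, then re-run with -c
docker run -v $(pwd):/zap/wrk/:rw -i owasp/zap2docker-stable zap-api-scan.py \
    -t https://mytarget.com -f graphql \
    -c rules.conf
Note that this controls which scan rules run, not which GraphQL mutations are exercised; for the latter, the Automation Framework or a trimmed schema file is still the likelier route.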
I have a .csv file with the following sample data format:
REFID|PARENTID|QTY|DESCRIPTION|DATE
AA01|1234|1|1st item|null
AA02|12345|2|2nd item|null
AA03|12345|3|3rd item|null
AA04|12345|4|4th item|null
To load the above file into a table, I am using the BCP command below:
/bcp $TABLE_NAME in $FILE_NAME -S $DB_SERVER -t "|" -F 1 -U $DB_USERNAME -d $DB_NAME
What I am trying to get is the result below (inserting the current date instead of null during the BCP load):
AA01|1234|1|1st item|3/16/2020
AA02|12345|2|2nd item|3/16/2020
AA03|12345|3|3rd item|3/16/2020
AA04|12345|4|4th item|3/16/2020
Update: I was able to exclude the header using the -F option per @Jamie's answer, but I am still looking for help on inserting the date with BCP. I've tried looking through some old Q&As, but no luck so far.
To exclude a single header record, you can use the -F option. This will tell BCP which line in the file is the first line to begin loading from. For your sample, -F2 should work fine. However, your command has other issues. See comments.
There is no way to introduce new data using the BCP command as you stated; BCP cannot introduce a date value while copying data into your table. To accomplish this, I suggest either a default on your date column, or first loading the raw data into a staging table without the date column and then introducing the date value as you see fit in later processing.
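A rough sketch of that staging approach (the table and column names are hypothetical, and -c assumes a plain character-mode load):
# 1) Bulk-load the raw file into a staging table that mirrors the file layout
#    (five pipe-separated fields, the last one kept as plain text), skipping the header row
bcp $DB_NAME.dbo.STAGE_ITEMS in $FILE_NAME -S $DB_SERVER -U $DB_USERNAME -c -t "|" -F 2
# 2) Copy into the real table, stamping the current date instead of the raw "null" text
sqlcmd -S $DB_SERVER -d $DB_NAME -U $DB_USERNAME -Q "
INSERT INTO dbo.ITEMS (REFID, PARENTID, QTY, DESCRIPTION, [DATE])
SELECT REFID, PARENTID, QTY, DESCRIPTION, GETDATE()
FROM dbo.STAGE_ITEMS;"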
I need to download an XLS table from a URL that requires two arrays as parameters: first orderids, then columns. For every order I have those columns.
wget --load-cookies cookies.txt \
--post-data='orderids=xxxx,xxxx,xxxx&columns=x,x,x,x,x' \
https://www.url.com/createordersexcel
This way, I only get the first value inserted.
That wget command looks OK, and worked in a small test I did.
Your problem is probably on the server side.
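If you want to double-check what actually leaves the client, one rough way is to replay the same POST against a request-echo service (httpbin.org is used here purely as an example) and inspect the form fields it reports back:
# Send the identical body to an echo endpoint and print the response to stdout
wget --load-cookies cookies.txt \
     --post-data='orderids=xxxx,xxxx,xxxx&columns=x,x,x,x,x' \
     -O - https://httpbin.org/post
If both arrays show up intact there, the request is fine and the handling of the comma-separated values is indeed happening on the server side.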
The output of a Hive query that uses UDFs ends with these two warnings. How do I suppress them? Note that the two warnings come right after the result rows, as part of the output.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
hadoop version
Hadoop 2.6.0-cdh5.4.0
hive --version
Hive 1.1.0-cdh5.4.0
If you use Beeline instead of the Hive CLI, the warnings go away. Not the best solution, but I'm planning to post to the CDH user group asking the same question to see if it's a bug that can be fixed.
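A minimal sketch of the equivalent Beeline invocation (the JDBC URL, user, and file names below are placeholders):
# Run the same query file through Beeline/HiveServer2 instead of the Hive CLI
beeline -u jdbc:hive2://localhost:10000/default -n hive --silent=true \
    -f select_count.hsql > output.txt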
This error occurs because an assembly jar is added that contains classes from jcl-over-slf4j.jar (which causes the stdout messages) and slf4j-log4j12.jar.
You can try a couple of things to begin with:
Try removing the assembly jar, if you are using one.
Look at the following link: https://issues.apache.org/jira/browse/HIVE-12179
It suggests that there is a flag in Hive so that spark-assembly is loaded only if HIVE_ADD_SPARK_ASSEMBLY = "true".
Also see https://community.hortonworks.com/questions/34311/warning-message-in-hive-output-after-upgrading-to.html, which mentions a workaround that avoids any other changes: manually remove the two lines from the end of the output files with a shell script, for example:
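A minimal sketch of that post-processing step (the file name is hypothetical; head -n -2 requires GNU coreutils):
# Drop the two trailing WARN lines that ended up in the exported file
head -n -2 hive_output.csv > hive_output.clean.csv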
I have tried setting HIVE_ADD_SPARK_ASSEMBLY=false, but it didn't work.
Finally, I found a question on the Cloudera community. See: https://community.cloudera.com/t5/Support-Questions/Warning-message-in-Hive-output-after-upgrading-to-hive/td-p/157141
You could try the following command; it works for me!
hive -S -d ns=$hiveDB -d tab=$t -d dunsCol=$c1 -d phase="$ph1" -d error=$c2 -d ts=$eColumnArray -d reporting_window=$rDate -f $dir'select_count.hsql' | grep -v "^WARN" > $gOutPut 2> /dev/null
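Stripped down to the parts that matter for suppressing the warnings (the script and output file names here are placeholders), the pattern is simply:
# Run in silent mode and filter the WARN lines out of stdout
hive -S -f my_query.hsql 2>/dev/null | grep -v "^WARN" > output.txt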
I want to validate the number of rows copied after a Sqoop import task. I know this can be accomplished using Sqoop's --validate option when a table is specified with --table. In my task, I am using the free-form query option (--query) instead of --table, and when I provide --validate it does not work.
Example:
sqoop import --connect abc.com --table test --validate --> works
sqoop import --connect abc.com --query "select * from test where \$CONDITIONS" --validate --> does not work
Please help.
Please look up the definition of sqoop validate. When you run the command, you will see it is not supported:
Run:
sqoop import --where "INTRVL_DT = To_Date" ... --as-textfile --validate
We get output:
Validation is not supported for where clause but single table only.
I tried tackling this issue here:
Validate a Sqoop with use of QUERY and WHERE clauses
You can reach out to me, and we can work on it.
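In the meantime, a rough manual equivalent of what --validate does (comparing source and target row counts) could look like this; the connection string, credentials, query, and target directory are all placeholders:
# Row count on the source database side
sqoop eval --connect jdbc:mysql://dbhost/mydb --username user -P \
    --query "SELECT COUNT(*) FROM test"
# Row count on the HDFS side (assumes a plain text import with one record per line)
hdfs dfs -cat /user/me/test_import/part-* | wc -l
If the two numbers match, the free-form query import copied every row.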