Get multiple errors for a rejected row in Vertica

I'm using Vertica for data uploading in my application. Users upload files and then view the rejected data along with the error messages. The problem is that if a row contains errors in multiple columns, Vertica discards the other validations and reports only the error for the first failing column.
Is there a way to get semicolon-separated error messages covering every error in a row in Vertica?
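For context, a minimal sketch of how rejected rows and their messages are typically captured with Vertica's COPY (the table and file names below are assumptions, not from the question); as far as I know, COPY records a single rejected_reason per row, which is why only the first failing column shows up:

-- Client-side load of the uploaded file, keeping rejected rows in a table
-- (uploads_staging, uploads_rejections and the path are placeholders).
COPY uploads_staging
FROM LOCAL '/tmp/user_upload.csv'
DELIMITER ','
REJECTED DATA AS TABLE uploads_rejections;

-- Inspect each rejected row and the single reason Vertica recorded for it.
SELECT rejected_data, rejected_reason
FROM uploads_rejections;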

Related

How to use PutSQL in Apache NiFi

I am a beginner in data warehousing and Apache NiFi. I was trying to pull data from a MySQL table into NiFi and then put that data into another MySQL database table. I am successfully getting data from the first database table, and I am also able to write that data to a file using the PutFile processor.
But now I want to store that queued data in a MySQL database table. I know there is a PutSQL processor, but it was not working for me.
Can anyone let me know how to do this correctly?
Here are the screenshots of my flow and of the PutSQL configuration.
I converted the data from Avro to JSON and then from JSON to SQL in case that would work, but that did not work either.
Use PutDatabaseRecord and remove the Convert* processors.
From the NiFi docs:
The PutDatabaseRecord processor uses a specified RecordReader to input (possibly multiple) records from an incoming flow file. These records are translated to SQL statements and executed as a single transaction. If any errors occur, the flow file is routed to failure or retry, and if the records are transmitted successfully, the incoming flow file is routed to success. The type of statement executed by the processor is specified via the Statement Type property, which accepts some hard-coded values such as INSERT, UPDATE, and DELETE, as well as 'Use statement.type Attribute', which causes the processor to get the statement type from a flow file attribute. IMPORTANT: If the Statement Type is UPDATE, then the incoming records must not alter the value(s) of the primary keys (or user-specified Update Keys). If such records are encountered, the UPDATE statement issued to the database may do nothing (if no existing records with the new primary key values are found), or could inadvertently corrupt the existing data (by changing records for which the new values of the primary keys exist).
This should be more performant and cleaner.
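To make the quoted behaviour concrete: with Statement Type set to INSERT and a record schema of, say, (id, name, price) going into a hypothetical products table, the processor's work is roughly equivalent to the following single transaction. This is an illustrative sketch only, not NiFi's literal output:

-- One transaction of inserts, one per record in the flow file;
-- table and values are made up for illustration.
START TRANSACTION;
INSERT INTO products (id, name, price) VALUES (1, 'widget', 9.99);
INSERT INTO products (id, name, price) VALUES (2, 'gadget', 4.50);
COMMIT;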

ORA-29285: file write error

I'm trying to extract data from an Oracle table. I'm using UTL_FILE for that, and I'm receiving the error ORA-29285: file write error. The weird thing here is that if I try to extract the data directly from the table, I get the error; if I extract the data using a simple view, the error is returned as well; BUT if I extract the data using a view with an ORDER BY, the extraction succeeds. I can't understand where the error comes from. I already looked at the length of the lines and found nothing. Any suggestions as to what the cause could be?
I extract a lot of other data through UTL_FILE successfully. This specific data was first uploaded into the Oracle table directly from a CSV file with ANSI encoding. However, I have other data uploaded the same way that I can export correctly. I also checked the encoding to rule out possible mistakes and found nothing.
Many thanks,
Priscila Ferreira
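The most commonly reported cause of ORA-29285 with UTL_FILE is writing a line longer than the max_linesize the file was opened with (the default is 1024 bytes when the parameter is omitted); it doesn't obviously explain the ORDER BY behaviour above, so treat this only as a hypothesis. A sketch with placeholder directory, file, table, and column names:

-- Open the file with the maximum line size (32767 bytes) so long rows
-- don't fail on UTL_FILE.PUT_LINE. DATA_DIR, export.csv, my_table and
-- the columns are placeholders, not names from the question.
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'export.csv', 'w', 32767);
  FOR r IN (SELECT col1 || ';' || col2 AS line FROM my_table) LOOP
    UTL_FILE.PUT_LINE(l_file, r.line);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/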

Checksum doesn't match: corrupted data.: while reading column `cid` at /opt/clickhouse//data/click

I am using ClickHouse to store data, and I'm getting the following error while querying the column cid in the click table.
Checksum doesn't match: corrupted data.
I don't have any replicas for now; any suggestions for recovery?
The error comes down to the fact that the CityHash128 checksum and the compressed data don't match, and this exception is thrown in the readCompressedData function.
You can try to disable this check using disable_checksum via the disableChecksumming method.
It could work, but corruption most probably means that something is wrong with your raw data, and there is little chance of recovery unless you have backups.
Usually, you will get the data part name and the column name in the exception message.
You could locate the specific data part, remove the files related to that single column, and restart the server. You will lose the (already corrupted) data for one column in one data part (it will be filled with default values on read), but all other data will remain.
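A sketch of how you might locate the affected part before touching anything on disk (the table name comes from the question; back up the part directory first, and note that in a wide part the column's files are typically cid.bin plus a cid.mrk* file):

-- Find the active parts of the `click` table and the on-disk path
-- where the column files for `cid` would live.
SELECT name, partition, path, rows
FROM system.parts
WHERE database = currentDatabase()
  AND table = 'click'
  AND active;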

COGNOS error: 'sqlOpenResult' status = '-237'

I am trying to devise a crosstab report for COGNOS. I am joining a main query to a dimension query so that all crosstabs will line up nicely with one another. When I ask to view the tabular data for either the main query or the dimension query, it displays fine. The main query has about 10,000 records and the dimension query 70 records.
However, when I run the report as normal, or view the tabular data for the crosstab query, I get an RQP-DEF-0177 error.
An error occurred while performing operation 'sqlOpenResult' status='-237'.
and the error details begin:
UDA-SOR-0005 Unable to write the file.RSV-SRV-0042 Trace back:RSReportService.cpp(724)
followed by a bunch of QFException clauses.
How could I fix this problem?
This is due to insufficient disk space for temporary files. Check IBM Cognos Configuration -> Environment -> Temporary files location.
When a report is running, a UDA file is created inside the temp folder, and the disk may not have enough space for the UDA file to expand. Increasing the disk space where the temp folder is located will resolve this issue.
This is a generic error that Cognos throws most of the time; if you want to dig into the details, go to /logs/cogserver.log, where you will find more information.

IBM BigSheets Issue

I am getting errors loading my files into BigSheets, both directly from HDFS (files that are the output of Pig scripts) and from raw data on the local hard disk.
I have observed that whenever I load the files and issue a row count to see if all the data has been loaded into BigSheets, I see a smaller number of rows being loaded.
I have checked that the files are consistent and have proper delimiters (\t or comma-separated fields).
The size of my file is around 2 GB, and I have used either the *.csv or *.tsv format.
Also, in some cases when I have tried to load a file directly from Windows, the files sometimes load successfully with the row count matching the actual number of lines in the data, and sometimes with a smaller row count.
Sometimes a fresh file used for the first time gives the correct result, but if I do the same operation again, some rows are missing.
Kindly share your experience with BigSheets and any solutions to problems where the entire data set is not loaded. Thanks in advance.
The data that you originally load into BigSheets is only a subset. You have to run the sheet to apply it to the full dataset.
http://www-01.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.analyze.doc/doc/t0057547.html?lang=en
