Is there a way to convert a CSV file to XLS using shell scripting?
The approach below doesn't work, as it puts all the data, commas included, into a single cell of the XLS:
cp test.csv test.xls
I would say something is wrong with your CSV, then. I have used CSV files created via bash scripts many times, and they open properly in MS Excel as well as in OpenOffice Spreadsheet.

BTW, you don't have to run that cp command. Just open the CSV in MS Excel and it will display properly; Excel has full support for CSV files.

If you are still facing this issue, please provide sample data to work with.
You could use ssconvert from the gnumeric package:
ssconvert data.csv data.xlsx
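If the output extension alone doesn't pick the right exporter, you can name it explicitly. A quick sketch; the exporter ID below is from memory, so confirm it against the list first:

ssconvert --list-exporters
ssconvert -T Gnumeric_Excel:excel_biff8 data.csv data.xls   # force the legacy .xls exporter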
I am currently trying to convert a simple table into a PDF file using an existing .rdf file.
My first approach was to look for a new program that can do this, because I want to replace the current 'Oracle Reports' program.

Is there any other program that supports converting SQL data into a PDF using an .rdf file?

I thought about writing a Python 3 script to do just that, but I wouldn't know where to start.
Oracle APEX 21.2 (latest at the current time) has a package named APEX_DATA_EXPORT that can take a SELECT statement and export it into various formats, one of them being PDF. The example in the documentation shows how to generate a PDF from a simple query. After calling apex_data_export.export, you can use the BLOB that is returned by the function and do whatever you need with the PDF.
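A minimal sketch following that pattern; the query is a placeholder, and the rest mirrors the documented API:

DECLARE
    l_context apex_exec.t_context;
    l_export  apex_data_export.t_export;
BEGIN
    -- Open a query context over the data that should end up in the PDF
    l_context := apex_exec.open_query_context(
        p_location  => apex_exec.c_location_local_db,
        p_sql_query => 'select * from emp' );

    -- Export the result set as a PDF
    l_export := apex_data_export.export(
        p_context => l_context,
        p_format  => apex_data_export.c_format_pdf );

    apex_exec.close( l_context );

    -- l_export.content_blob now holds the PDF; store or serve it as needed
END;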
There are not very many options for styling and formatting the table, but Oracle does plan on adding additional printing capabilities for PDFs in the future.
I'm building a project in JMeter and I would like to read an Excel file with the CSV Data Set Config, to avoid using Groovy to read it.

Do you know if that is possible? If not, is there any other JMeter element that can help me read an Excel file row by row?
CSV Data Set Config is only able to read text files, as per Wikipedia:
A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values.
If your data file is a binary file like .xls or .xlsx, unfortunately the CSV Data Set Config won't help. You have the following options:

1. Export the .xls or .xlsx file to CSV using MS Excel or an equivalent.
2. If for any reason you cannot use point 1, you can use the Apache POI libraries to read the Excel file formats in JSR223 Test Elements, as described in the How to Implement Data Driven Testing in your JMeter Test article; a minimal sketch follows this list.

You can also see the Busy Developers' Guide to HSSF and XSSF Features for code snippets covering popular scenarios for reading and writing Excel files.
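For illustration, a minimal JSR223 (Groovy) sketch that reads the first sheet of an .xlsx row by row; it assumes the Apache POI jars have been added to JMeter's lib folder and that the file is named data.xlsx:

import org.apache.poi.xssf.usermodel.XSSFWorkbook

// Read data.xlsx and log each row; adapt the body to feed your samplers
new File('data.xlsx').withInputStream { stream ->
    def workbook = new XSSFWorkbook(stream)
    workbook.getSheetAt(0).each { row ->
        log.info(row.collect { it.toString() }.join(';'))
    }
    workbook.close()
}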
When trying to load a CSV file into an Oracle table through ODI, ODI is not able to fetch the data from the file. The problem is the CSV file's format: all the data is on a single line. But when we open the CSV file in Excel and save it as CSV again, the format changes, the data gets arranged properly, and we can then import it through ODI.

The problem is that we need to import the original CSV file, whatever its format. Is there a way to do that?
SQL*Loader is the first thing that comes to mind; I use it a lot (a sketch follows below).

SQL Developer will be a better option if you don't want to work with command-line utilities.
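A minimal SQL*Loader sketch; the table name, columns, and credentials are placeholders:

-- loader.ctl
LOAD DATA
INFILE 'test.csv'
APPEND INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)

Then run it from the command line:

sqlldr userid=scott/tiger control=loader.ctl log=loader.log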
Try using external tables: you can configure how the CSV should be read in the external table's access parameters.
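For example, a minimal external table sketch; the directory object, columns, and delimiters are placeholders, and RECORDS DELIMITED BY is the knob to adjust when the whole file arrives as a single line:

CREATE TABLE csv_ext (
  col1 VARCHAR2(100),
  col2 NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('test.csv')
)
REJECT LIMIT UNLIMITED;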
I am maintaining the data in different sheets in a CSV file, but now I want to read this through JMeter.

I know how to read a single CSV file in JMeter, so I need help reading the different sheets in a single CSV file.

Can anyone please help me find a solution for this?
JMeter only reads CSV, so you would need to save each sheet as a CSV file.

Otherwise, you could try a setUp Thread Group with custom code (Beanshell or JSR223) that takes the Excel file and extracts each sheet into a CSV file, as sketched below.
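A rough JSR223 (Groovy) sketch of that idea for a setUp Thread Group; it assumes the Apache POI jars are on JMeter's classpath and that the workbook is named data.xlsx:

import org.apache.poi.ss.usermodel.WorkbookFactory

// Dump every sheet of the workbook to its own CSV file before the test runs
new File('data.xlsx').withInputStream { stream ->
    def workbook = WorkbookFactory.create(stream)
    workbook.each { sheet ->
        new File("${sheet.sheetName}.csv").text = sheet.collect { row ->
            row.collect { it.toString() }.join(',')
        }.join('\n')
    }
    workbook.close()
}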
A CSV file is not made for that: a CSV file contains only one sheet.

As PMD UBIK-INGENIERIE suggests, export every sheet to a separate CSV file.
I'm trying to import data from a csv file which, unfortunately, contains multiple data tables. Actually, it's not really a pure csv file.
It contains a header section with some metadata, and then the actual CSV data parts are separated by:
//-------------
Table <table_nr>;;;;
An example file looks as follows:
Summary;;
Reporting Date;29/05/2013;12:36:18
Report Name;xyz
Reporting Period From;20/05/2013;00:00:00
Reporting Period To;26/05/2013;23:59:59
//-------------
Table 1;;;;
header1;header2;header3;header4;header5
string_aw;0;0;0;0
string_ax;1;1;1;0
string_ay;1;2;0;1
string_az;0;0;0;0
TOTAL;2;3;1;1
//-------------
Table 2;;;
header1;header2;header3;header4
string_bv;2;2;2
string_bw;3;2;3
string_bx;1;1;1
string_by;1;1;1
string_bz;0;0;0
What would be the best way to process and load such data using Kettle?

Is there a way to split this file into the header and CSV data parts and then process each of them as a separate input?
I don't think there are any steps that will really help you with data in such a format. You probably need to do some preprocessing before bringing the data into a CSV input step. You could still do this in your job, though, by calling out to the shell first, for example with an awk script that splits the file into its component parts, and then loading those files via the normal Kettle pattern.
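A rough sketch of such an awk preprocessing step; the input file name report.csv is a placeholder:

# Split the file into part_0.csv (metadata header), part_1.csv, part_2.csv, ...
# at every //------------- separator; each part keeps its "Table N" title line.
awk '
  BEGIN { n = 0 }
  /^\/\/-+/ { n++; next }
  { print > ("part_" n ".csv") }
' report.csv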