This is my first time using Ruby. I'm writing an application that parses data and performs some calculations based on it; the source of the data is a JSON file. I'm aware I can use JSON.parse() here, but I'm trying to write my program so that it will work with other sources of data. Is there a clear-cut way of doing this? Thank you.
If your source file is JSON, use JSON.parse; do not implement a JSON parser on your own. If the source file is a CSV, use the CSV class.
If your application should be able to read multiple formats, add one reader class per data type (JSONReader, CSVReader, etc.) and then decide which reader to use based on the file extension.
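A minimal sketch of that idea in Ruby (the class names, the READERS table and the read_data helper are just illustrative, not a standard API):

    require 'json'
    require 'csv'

    # Every reader exposes the same interface: #read returns the parsed data.
    class JSONReader
      def read(path)
        JSON.parse(File.read(path))
      end
    end

    class CSVReader
      def read(path)
        CSV.read(path, headers: true).map(&:to_h)
      end
    end

    # Map file extensions to readers; supporting a new format is one more entry here.
    READERS = { '.json' => JSONReader.new, '.csv' => CSVReader.new }

    def read_data(path)
      reader = READERS.fetch(File.extname(path)) do
        raise ArgumentError, "unsupported format: #{path}"
      end
      reader.read(path)
    end

    data = read_data('input.json')

The rest of the program only calls read_data, so the calculation code never needs to know which format the data came from.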
I'm using NiFi 1.11.4 to read CSV files from an SFTP server, do a few transformations, and then drop them off on GCS. Some of the files contain no content, only a header line. During my transformations I convert the files to the Avro format, but when converting back to CSV, no output file is produced for the files whose content is empty.
I have the following settings for the Processor:
And for the Controller:
I did find the following topic: How to use ConvertRecord and CSVRecordSetWriter to output header (with no data) in Apache NiFi? but the comments there mention explicitly that ConvertRecord should cover this since 1.8. Sadly, either I understood that incorrectly, it does not actually work, or my setup is wrong.
While I could make it work by explicitly writing the schema as a line to the empty files, I wanted to know whether there is a more elegant way.
I want to parse a huge JSON file with more accuracy and less code. Is there any way to do this? I am able to read the huge file into a buffer, but when I try to access the values I have to handle every key and value individually. Is there any way to retrieve the values of objects only when they are needed? Please provide an example of parsing a huge JSON file and using its values.
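For picking individual values out of a very large document without building the whole object tree in memory, one option is a SAX-style (event-driven) parse. A rough sketch, assuming the oj gem and its Saj callback API; the users.json file and the email key are just placeholders:

    require 'oj'

    # Oj calls these methods as it streams through the document,
    # so only the values we care about are kept in memory.
    class EmailCollector < Oj::Saj
      attr_reader :emails

      def initialize
        super
        @emails = []
      end

      def add_value(value, key)
        @emails << value if key == 'email'
      end

      # Callbacks we don't need can stay empty.
      def hash_start(key); end
      def hash_end(key); end
      def array_start(key); end
      def array_end(key); end

      def error(message, line, column)
        raise "parse error at #{line}:#{column}: #{message}"
      end
    end

    handler = EmailCollector.new
    File.open('users.json') { |io| Oj.saj_parse(handler, io) }
    puts handler.emails

If the file fits in memory and the concern is only code volume, plain JSON.parse plus Hash#dig to pull out the values you need is simpler.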
I want to convert an input CSV file to an XML file using ESQL in IIB v10. Can you please help me with the ESQL code to achieve this? I've provided a sample of the input CSV file and of the output XML file below:
Input CSV file
Output XML file
Your question is based on the wrong approach. Using ESQL alone to do this on Integration Bus is like using a knife to cut down a tree when you have a chainsaw available. If you want to convert a CSV file to XML, the proper solution is the following:
1) Define a new DFDL schema to parse the CSV file
2) Define your xsd for the output XML
3) Use the DFDL parser when you read the CSV, and use the structure you created (on the FileInput node, for example; I don't know your exact case)
4) Use a mapping node to map from your DFDL structure to your XML structure (defined in the xsd)
Note: the last step can also be done with alternative solutions, like Compute nodes (ESQL, Java, C#, PHP).
If you have any additional questions, feel free to contact me.
Is there a utility that allows me to loop through a CSV file and send multiple statements to an LRS? My CSV file would have the necessary Actor, Verb, and Object information in it, all separated by commas.
Thank you.
If you're looking for an existing commercial utility, Watershed LRS has exactly that functionality; you can upload a CSV and it generates statements.
I'm sure you could create something fairly quickly in your chosen programming language using one of the Tin Can code libraries too.
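If you do write it yourself, a rough Ruby sketch of that loop might look like the following (the LRS endpoint, credentials and CSV column names are made up for illustration; the Tin Can libraries mentioned above would wrap the HTTP details for you):

    require 'csv'
    require 'json'
    require 'net/http'
    require 'uri'

    # Placeholder LRS connection details.
    LRS_ENDPOINT = URI('https://lrs.example.com/xapi/statements')
    LRS_USER     = 'key'
    LRS_PASSWORD = 'secret'

    # Expects headers: actor_email,actor_name,verb_id,verb_name,object_id,object_name
    CSV.foreach('statements.csv', headers: true) do |row|
      # Build a minimal xAPI statement from the row.
      statement = {
        actor:  { mbox: "mailto:#{row['actor_email']}", name: row['actor_name'] },
        verb:   { id: row['verb_id'], display: { 'en-US' => row['verb_name'] } },
        object: { id: row['object_id'],
                  definition: { name: { 'en-US' => row['object_name'] } } }
      }

      request = Net::HTTP::Post.new(LRS_ENDPOINT)
      request.basic_auth(LRS_USER, LRS_PASSWORD)
      request['Content-Type'] = 'application/json'
      request['X-Experience-API-Version'] = '1.0.3'
      request.body = statement.to_json

      response = Net::HTTP.start(LRS_ENDPOINT.host, LRS_ENDPOINT.port, use_ssl: true) do |http|
        http.request(request)
      end
      puts "#{row['actor_email']}: #{response.code}"
    end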
How would I create a list of elements in VB.NET, save it to a .dat file, and have Ruby re-create that list (as an array) with the same elements (they will be strings, booleans, and integers)?
You can do it, but you'd need to find some representation for it. The easiest is probably JSON, so you would:
make the data structure in VB
write it to JSON as a file
read the JSON file using Ruby.
Here's a JSON serializer for .Net:
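On the Ruby side, reading the file back is short. A minimal sketch, assuming the VB.NET program wrote its list to a file called list.json:

    require 'json'

    # A JSON array of strings, booleans and integers becomes a plain Ruby array.
    items = JSON.parse(File.read('list.json'))
    items.each { |item| puts "#{item.inspect} (#{item.class})" }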
A .dat file is just a binary blob, is it not? If there's any particular format you use, you could easily translate it to equivalent Ruby code, as long as the knowledge is duplicated on both ends, though that leads to a violation of the DRY principle. JSON might be a good intermediate representation (as noted by Charlie Martin) because it's a plain-text format, and you can always add compression.