Sorting data placed in isolated storage - windows-phone-7

I know how to store and retrieve data using isolated storage. My problem is how to sort the data I want to recover from what has already been stored. All my data is stored in a single file.
The user stores data every day, and on a particular date he might make two entries and none on some other day. In that case, how should I search for the particular day's info I need?
Could you also explain how data is stored in isolated storage: is it in packets of data or some other way? If I store two data sets in the same file, does it automatically shift to the next line for storing the other data, or do I have to specify that?
Moreover, if I want to save data on the same line, does it automatically separate the data with a tab or by inserting some character between the two data sets, or does the developer have to take care of this?

If the date is saved in the file name, then you can use the following code to search for it (GetFileNames takes a search pattern, so wildcards work):
var appStorage = IsolatedStorageFile.GetUserStoreForApplication();
string[] matchingFiles = appStorage.GetFileNames("the date you are looking for");
In answer to your other question: if you use .Write() when writing text to a file, it will create one long stream of data; if you use .WriteLine(), the information will be written and a newline will be appended at the end. No separator is inserted for you within a line, so the developer has to write one (a tab, a comma, etc.) between two data sets.
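For example, here is a minimal sketch (the file name and the comma separator are my own illustrative choices, and the usual System.IO/System.IO.IsolatedStorage namespaces are assumed):
var appStorage = IsolatedStorageFile.GetUserStoreForApplication();
using (var writer = new StreamWriter(appStorage.OpenFile("entries.txt", FileMode.Append, FileAccess.Write)))
{
    // Two values on one line: the separator (here a comma) is NOT added for you.
    writer.Write("2011-10-21");
    writer.Write(",");
    writer.Write("first entry of the day");
    writer.WriteLine();                                      // ends the line, so the next entry starts on a new line
    writer.WriteLine("2011-10-21,second entry of the day");  // Write plus a newline in one call
}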
I'm not sure, as you weren't very clear, but just in case, here is a general procedure for reading files from IsolatedStorage:
var appStorage = IsolatedStorageFile.GetUserStoreForApplication();
string fileContent;
using (StreamReader reader = new StreamReader(appStorage.OpenFile(fileName, FileMode.Open, FileAccess.Read)))
{
    fileContent = reader.ReadToEnd();
}
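Once the content is in memory, you can search and sort it yourself. A sketch, assuming each line follows the "date,value" layout from the writing example above (the date literal is illustrative, and System.Linq is assumed):
var entriesForDay = fileContent
    .Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries)
    .Select(line => line.Split(','))                  // parts[0] = date, parts[1] = value
    .Where(parts => parts[0] == "2011-10-21")         // zero, one, or several entries may match a given day
    .OrderBy(parts => parts[0])                       // sorts by date if the file spans several days
    .ToList();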

Related

Extract data from webpage to Excel

I am trying to automate this portal, but I am having trouble since I am new to UiPath.
This is a URL
I have to extract CompanyName, BrokerName, Address, and Phone into Excel for a number of records, as per user input.
Since the client data is in one element and separated by breaks (br), I would suggest still using the Scrape Data feature (pick the first and second data set in the group) and pulling the data set in as-is, so it's in block format separated by new lines.
Then iterate through the results, split each result into a string array, iterate through the string array, and evaluate each line using a regex. If a line matches an address, an email, a phone number, etc., handle it from there; you could dump the matches into a temp data table and then dump that table into Excel. (See the sketch below.)
Granted, your regular expressions might need some tuning and might miss a few records, but it would be a good start.
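In UiPath this would be built from activities and VB.NET expressions, but the classification logic itself looks roughly like this C# sketch (scrapedBlocks and both patterns are hypothetical placeholders to adapt):
using System.Text.RegularExpressions;

var phonePattern = new Regex(@"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$");  // illustrative US-style phone pattern
var emailPattern = new Regex(@"^\S+@\S+\.\S+$");                       // deliberately loose email check

foreach (string block in scrapedBlocks)            // one block per scraped record
{
    foreach (string rawLine in block.Split('\n'))
    {
        string line = rawLine.Trim();
        if (phonePattern.IsMatch(line))
        {
            // route the value to the Phone column of the temp data table
        }
        else if (emailPattern.IsMatch(line))
        {
            // route the value to an Email column
        }
        else
        {
            // fall back to position for CompanyName / BrokerName / Address
        }
    }
}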
Hope that helps get you started.

How to read an excel sheet and put the cell value within different text fields through UiPath?

I have an Excel sheet as follows:
I have read the Excel contents and, to iterate over the contents later, I have stored them in an Output Data Table as follows:
Read Range - Output:
DataTable: CVdatatable
Output Data Table
DataTable: CVdatatable
Text: opCVdatatable
Screenshot:
Finally, I want to read the text opCVdatatable in an iteration and write it into text fields. So in the desired input fields I entered opCVdatatable or opCVdatatable + "[k(enter)]" as required.
Screenshot:
But UiPath seems to start from the beginning of the Output Data Table whenever I call opCVdatatable.
In short, each desired input field iteratively gets filled with all of the data stored in the Output Data Table.
Can someone help me out please?
My first recommendation is to use the Workbook: Read Range activity to read data from Excel, because it is quicker, works in the background, and does not require Excel to be installed on the system.
Start your sequence like this (note the add headers property is not checked):
You do not need to use Output Data Table, because that activity outputs a single string containing all row items. What you want to do instead is access the items in the data table and output each one as a string in your Type Into, e.g. CVDatatable.Rows(0).Item(0).ToString, like so:
You mention you want to read the text opCVdatatable in an iteration and write it into text fields. This is a little bit more complex, but I'll give you an example. You can use a For Each Row activity and loop through each row in CVDatatable, setting the Index property if required. See below:
The challenge is to get the selector correct here and make it dynamic, so that it targets a different text field per iteration. The selector for the type into activity will depend on the system you are targeting, but here is an example:
And the selector for this:
Also, here is a working XAML file for you to test.
Hope this helps.
Chris
Here's a different, more general approach. Instead of including the target in the process itself, the Excel sheet would be modified to include parts of a selector:
Note that column B now contains an identifier, and this ID depends on the application you will be working with. For example, here's what my sample app looks like. As you can see, the first text box has an id of 585, the second one 586, and so on (note that you can work with any kind of identifier, including the control's name, if it is exposed to UiPath):
Now, instead of adding multiple Type Into activities to your workflow, you would add just a single one, loop over each of the data table's rows, and create a dynamic selector:
In my case the selector for the Type Into activity looks as follows:
"<wnd cls='#32770' title='General' /><wnd ctrlid='" + row(1).ToString() + "' />"
This will allow you to maintain the process from the Excel sheet alone - if there's a new field that needs to be mapped, just add it to your sheet. No changes to the Workflow are required.

Combine csv files with different structures over time

I am here to ask you a hypothetical question.
Part of my current job consists of creating and updating dashboards. Most dashboards have to be updated everyday.
I've created a PowerBI dashboard from data linked to a folder filled with csv files. I did some queries to edit some things. So, everyday, I download a csv file from a client's web application and add the said file to the linked folder, everything gets updated automatically and all the queries created are applied.
Hypothetical scenario: my client changes the CSV structure (e.g. column order, a few column names). How can I deal with this so I can keep my merged CSV files table updated?
My guess would be to put the files with the new structure in a different folder, apply new queries so the table structures match, then append queries so I have a single table of data.
Is there a better way?
Thanks in advance.
Say I have some CSVs (all in the same folder) that I need to append/combine into a single Excel table, but:
the column order varies in some CSVs,
and the headers in some CSVs are different (for whatever reason) and need changing/renaming.
First CSV:
a,c,e,d,b
1,1,1,1,1
2,2,2,2,2
3,3,3,3,3
Second CSV:
ALPHA,b,c,d,e
4,4,4,4,4
5,5,5,5,5
6,6,6,6,6
Third CSV:
a,b,charlie,d,e
7,7,7,7,7
8,8,8,8,8
9,9,9,9,9
10,10,10,10,10
If the parent folder (containing my CSVs) is at "C:\Users\user\Desktop\changing csvs\csvs", then this M code should help me achieve what I need:
let
    renameMap = [ALPHA = "a", charlie = "c"],
    filesInFolder = Folder.Files("C:\Users\user\Desktop\changing csvs\csvs"),
    binaryToCSV = Table.AddColumn(filesInFolder, "CSVs", each
        let
            csv = Csv.Document([Content], [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.Csv]),
            promoteHeaders = Table.PromoteHeaders(csv, [PromoteAllScalars = true]),
            headers = Table.ColumnNames(promoteHeaders),
            newHeaders = List.Transform(headers, each Record.FieldOrDefault(renameMap, _, _)),
            renameHeaders = Table.RenameColumns(promoteHeaders, List.Zip({headers, newHeaders}))
        in
            renameHeaders
    ),
    append = Table.Combine(binaryToCSV[CSVs])
in
    append
You'd need to change the folder path in the code to whatever it is on your system.
Regarding the line renameMap = [ALPHA = "a", charlie = "c"]: I needed to change "ALPHA" to "a" and "charlie" to "c" in my case, but you should replace these with whatever columns need renaming in your case. (Add however many headers you need to rename.)
The line append = Table.Combine(binaryToCSV[CSVs]) appends the tables to one another (to give you one table). It automatically handles differences in column order. If there are any rogue columns (e.g. say there was a column f in one of my CSVs that I didn't notice), then the final table will contain a column f, albeit with some nulls/blanks, which is why it's important that all renaming is done before that line.
Once combined, you can obviously do whatever else needs doing to the table.
Try it to see if it works in your case.

Spring Batch - Non row based data structure file reader

I have a file which does not contain row-based data. It's a text file which contains multiple tables. In order to create one item, I need to take data from several lines. In my case, one file is one record for me, and I have to extract data from several lines to populate the item object.
Example:
class DataItem {
    private String price;
    private String quantity;
    // Getters and setters
}
Input File is like:
Price Data
========================
Price : 150$
Quantity: 4000
-------------------------
As shown above, I need to parse several lines from the file in order to make one database record (one item read).
How can I achieve this using Spring batch?
The FlatFileItemReader has a couple of extension points that will be useful for you: the RecordSeparatorPolicy and the LineMapper.
RecordSeparatorPolicy
The RecordSeparatorPolicy tells the reader when a full record has been read. Its RecordSeparatorPolicy#isEndOfRecord(String record) method takes the String that has been read so far and returns true if it represents a full record and false if not. In your case, you'll want to develop one that returns true once one of the tables has been completely read in, as in the sketch below.
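A minimal sketch, assuming the dashed rule that closes each table in your sample input is what terminates a record (the class name is illustrative):
import org.springframework.batch.item.file.separator.RecordSeparatorPolicy;

public class TableRecordSeparatorPolicy implements RecordSeparatorPolicy {

    @Override
    public boolean isEndOfRecord(String record) {
        // The accumulated text is one full record once the dashed rule has been read.
        return record.trim().endsWith("-----");
    }

    @Override
    public String preProcess(String record) {
        // Preserve line breaks between the lines the reader accumulates.
        return record + "\n";
    }

    @Override
    public String postProcess(String record) {
        return record;
    }
}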
LineMapper
The LineMapper is a strategy interface that allows the String representing a record to be mapped to an item. Simple cases can be addressed by the DefaultLineMapper, which takes a single String, tokenizes it into a set of tokens via a LineTokenizer (the result is represented by a FieldSet, an object similar to a ResultSet, only for files), and passes that to a FieldSetMapper that maps the tokens onto the item to be returned. You can either implement your own LineMapper, or you may be able to just implement a LineTokenizer and use the rest of the out-of-the-box components. A sketch of a custom LineMapper for your format follows.
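This is only illustrative and assumes each record arrives as the newline-joined lines of one table, per the separator policy above:
import org.springframework.batch.item.file.LineMapper;

public class DataItemLineMapper implements LineMapper<DataItem> {

    @Override
    public DataItem mapLine(String record, int lineNumber) {
        DataItem item = new DataItem();
        for (String line : record.split("\n")) {
            int colon = line.indexOf(':');
            if (colon < 0) {
                continue;                               // skip header and rule lines
            }
            String key = line.substring(0, colon).trim();
            String value = line.substring(colon + 1).trim();
            if (key.equals("Price")) {
                item.setPrice(value);                   // e.g. "150$"
            } else if (key.equals("Quantity")) {
                item.setQuantity(value);                // e.g. "4000"
            }
        }
        return item;
    }
}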
With both of these extension points, I'd expect you to be able to map that data with Spring Batch in a pretty straightforward manner.

Saving data in eXist-db

I'm new to eXist-db. What I want to do is store a LARGE amount of data in XML format in a native XML database for fast processing (searching/updating/etc.). Unfortunately, the documentation provided doesn't explain clearly how to save/modify data in a persistent database (or back to XML files).
Below is roughly what I want to do in eXide. The lines that I don't know how to write are marked with the comments Q1, Q2, and Q3:
xquery version "3.0";
let $data := doc('file:///c:/eXist/database.xml')
let $newdata := doc('file:///c:/import/newdata.xml')
(: Q1. How to do merging of data like below? :)
update insert $newdata into $data
(: Q2. How to save the changes back to database.xml? :)
doc('file:///c:/eXist/database.xml') := $data
let $result := <result>
    {
        for $t in $data/book/title
        where $t/../publisher = 'XYZ Company'
        return $t
    }
</result>
(: Q3 How to save query result to a new file? :)
doc('file:///c:/export/XYZ Company Report.xml') := $result
Thanks in advance.
Your doc() functions all point to files on your file system, which indicates a misunderstanding about how one works with XML data in eXist-db. With eXist-db, all manipulation of XML data happens inside the database. Thus, you need to first store the XML data in the database, and then you can perform your manipulations. Then, if needed, you can serialize the data back out onto the file system.
To store data into the database, see Getting Data into eXist-db.
To merge data, see XQuery Update Extensions.
To serialize the data back out onto the file system, see the function documentation for file:serialize().
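Put together, here is a sketch of the three steps. The /db/mydata collection and the export path are illustrative, and note that eXist's update extension persists changes immediately, so Q2 needs no separate save step:
xquery version "3.0";

(: assumes database.xml and newdata.xml have already been stored under /db/mydata :)
let $data := doc('/db/mydata/database.xml')
let $newdata := doc('/db/mydata/newdata.xml')

(: Q1/Q2: merge; the update extension writes the change back to the database :)
let $merge := update insert $newdata/* into $data/*

let $result :=
    <result>
    {
        for $t in $data/book/title
        where $t/../publisher = 'XYZ Company'
        return $t
    }
    </result>

(: Q3: serialize the query result out to the file system :)
return file:serialize($result, 'c:/export/report.xml', ())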
