I am fairly new to Selenium Ruby/RSpec scripting and I have come across a use case that requires data to be pulled from a csv or xlsx file in the test. Any help or suggestions on how to approach this would be greatly appreciated.
The test would pull data from the csv file and use the value from each row to complete the same set of actions. This particular file contains a single column of ids, and the same action would need to be repeated until all values in the column have been used. Here are the basic steps:
User logs in
User pulls first value (id) from file to search in text field
User completes action against this id
User returns to text field and pulls value from second row of same file
User completes action against this id
This would repeat until all rows are completed
Is it possible to run the same method repeatedly while iterating through the data from the file?
How would you pull the csv file into the script and specifically grab the first value (and subsequent values) throughout?
I know this is pretty vague, but I have not seen any examples like this on SO or while researching online. Any suggestions or examples would be very much appreciated.
I'm not sure I fully understand your question but I can try to point you in the right direction.
Ruby has a csv parser built in, which you can read about here: CSV class
One of the first examples would seem to provide the functionality you are looking for. I think in your case you would want to do something like:
require "csv"

CSV.foreach("path/to/file.csv") do |row|
  # I'm assuming the id is the first element in each row
  id = row.first
  method_that_does_stuff(id)
  # assert that stuff has happened ...
end
Does that help get you started?
I am a novice Go programmer trying to learn the language's features. I want to split a large csv file into multiple files in Go, each file containing the header. How do I do this? I have searched everywhere but couldn't find the right solution. Any help in this regard will be greatly appreciated.
Also, please suggest a good book for reference.
Thank you.
Depending on your shell fu this problem might be better suited for common shell utilities but you specifically mentioned go.
Let's think through the problem.
How big is this csv file? Are we talking 100 lines or is it 5G?
If it's smallish I typically use this:
http://golang.org/pkg/io/ioutil/#ReadFile
However, this package also exists:
http://golang.org/pkg/encoding/csv/
Regardless - let's return to the abstraction of the problem. You have a header (which is the first line) and then the rest of the document.
So what we probably want to do (if ignoring csv for the moment) is to read in our file.
Then we want to split the file body by all the newlines in it.
You can use this to do so:
http://golang.org/pkg/strings/#Split
You didn't mention but do you know how many files you want to split by or would you rather split by the line count or byte count? What's the actual limitation here?
Generally it's not going to be file count but if we pretend it is we simply want to divide our line count by our expected file count to give lines/file.
Now we can take slices of the appropriate size and write the file back out via:
http://golang.org/pkg/io/ioutil/#WriteFile
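Putting those pieces together, here is a minimal sketch of that approach. The file names and the lines-per-file value are made up for illustration; if your real constraint is a target file count, divide the line count by that instead.

package main

import (
	"fmt"
	"io/ioutil"
	"strings"
)

func main() {
	// Read the whole file into memory - fine for smallish files.
	data, err := ioutil.ReadFile("big.csv") // assumed input name
	if err != nil {
		panic(err)
	}

	// Split on newlines; the first line is the header, the rest is the body.
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	header := lines[0]
	body := lines[1:]

	linesPerFile := 1000 // assumed chunk size

	for i, n := 0, 0; i < len(body); i, n = i+linesPerFile, n+1 {
		end := i + linesPerFile
		if end > len(body) {
			end = len(body)
		}
		// Prepend the header to every chunk before writing it back out.
		chunk := header + "\n" + strings.Join(body[i:end], "\n") + "\n"
		if err := ioutil.WriteFile(fmt.Sprintf("part_%03d.csv", n), []byte(chunk), 0644); err != nil {
			panic(err)
		}
	}
}

Note that splitting on raw newlines will mis-handle quoted fields that contain line breaks; if your data can have those, use the encoding/csv package from the second link instead.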
A trick I sometimes use to help think through these things is to write down the mission statement.
"I want to split a large csv file into multiple files in go"
Then I start breaking that up into pieces, taking the divide-and-conquer approach - don't try to solve the entire problem in one go - just break it up until each piece is small enough to think about.
Also - make gratuitous use of pseudo-code until you can comfortably write the real code itself. Sometimes it helps to just write a short comment inline with how you think the code should flow and then get it down to the smallest portion that you can code and work from there.
By the way - many of the golang.org packages have example links where you can run the example code right in your browser and cut/paste it into your own local environment.
Also, I know I'll catch some haters with this - but as for books - imo - you are going to learn a lot faster just by trying to get things working rather than reading. Action trumps passivity always. Don't be afraid to fail.
Here is a package that might help. You can set the desired chunk size in bytes and the file will be split into the appropriate number of chunks.
We have been programming an application for the past two weeks to produce a valid csv file to import into Magento.
But we have a problem with importing in general, as we get the error that Magento can't find the required columns: sku. I've been looking through a lot of forums.
I have seen it could be the visibility, but we have that in our csv. I will give you an example of what our csv looks like:
sku,name,ean,manufacturer,price,msrp,tax_class_id,qty,_category,is_in_stock,status,description,_type,visibility,_attribute_set,color,geluidssysteem,platform_consoles,protection,connection,kabel_lengte,lader,nintendo_platform,model,megapixels,geschikt_foto_video_tas,schermdiagonaal,size,keyboard_layout,geheugen,draagstijl_headset,materiaal,type_camera,type_toetsen,left_right_handed,vermogen,toetesenbord_verlicht,sensorkeuze,stroom_voorziening,connection_mouse,
MRM-01855,AA FUSION AUDIO 3.5mm to 3.5mm Jack kabel 1 meter wit,5060166512163,Advanced Accessories,3.18,,2,6,Nintendo/Nintendo bundels,1,1,Boomsjors,simple,4,PC kabels,Green,,,,Universal,1.8 Meter,,,,,,,,,,,,,,,120w,,,,,
We also had the problem that the description contains a comma, which then messes up our csv.
If you need any more information, let me know!
Please make sure you are using a UTF-8 encoded file format for your .csv file. This error mostly occurs with wrongly formatted file content.
To correct it, open your .csv file in an editor, go to 'Save As', select the file type 'MSDOS CSV (.csv)', and save.
Hope this will help...
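On the comma-in-the-description problem: a field that contains a comma has to be wrapped in double quotes, and any decent CSV library does that for you when writing. A rough illustration (Go's encoding/csv is used here purely as an example; whatever CSV library your application is written with should behave the same way):

package main

import (
	"encoding/csv"
	"os"
)

func main() {
	w := csv.NewWriter(os.Stdout)
	// The description contains commas; the writer quotes that field automatically,
	// so it stays a single column when Magento reads the file back in.
	record := []string{"MRM-01855", "AA FUSION AUDIO 3.5mm to 3.5mm Jack kabel, 1 meter, wit", "3.18"}
	if err := w.Write(record); err != nil {
		panic(err)
	}
	w.Flush()
	if err := w.Error(); err != nil {
		panic(err)
	}
}

This prints MRM-01855,"AA FUSION AUDIO 3.5mm to 3.5mm Jack kabel, 1 meter, wit",3.18 - the quoted middle field comes back as a single value.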
I want to copy some specific text from an internet browser (Chrome) and paste it into the proper fields of a Microsoft Word table. Let me explain exactly what I want. I have this kind of page structure in Chrome:
Name-Deepak,Raju,Jhon,Robert.......
Salary-200,254,673,953...
Phone-987535747,856889479,64688539,357954228....
Etc..
I have a table in MS Word as:
Sl. Phone. Name. Salary.
Can I make an auto copy/paste program to make my table look like this:
Sl. Phone. Name. Salary
1. 987535747. Deepak. 200
2. .......
Please suggest the most suitable platform for this. It would be best for me if a bat file could do the job. I know it's a bit of an odd question, and I shouldn't ask for the entire program, just a section of it, but I actually don't know where to start.
Rather than use wget, which will only retrieve the document, what you want is a way of parsing the web content and writing the results into an output file.
After searching the web, I could only come across:
lynx, which is a text-based browser; you can pass the -dump parameter to output the rendered text to a file, and then write a small script to do the final bit (see the sketch below). Also take a look at this link for more info on the switches you can use, especially if the desired text has links in it (-nolist).
elinks, which is a more advanced text-based browser.
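If you go the lynx route, the glue script can be almost anything; here is a rough sketch in Go. The URL, the "Label-value,value,..." line format, and the field names are all taken from the example in the question and will need adjusting to the real page; it also assumes each label has the same number of values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the rendered page as plain text; -nolist drops the trailing link list.
	out, err := exec.Command("lynx", "-dump", "-nolist", "http://example.com/page").Output()
	if err != nil {
		panic(err)
	}

	// Collect the comma-separated values that follow each "Label-" prefix.
	fields := map[string][]string{}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		for _, label := range []string{"Name", "Salary", "Phone"} {
			if strings.HasPrefix(line, label+"-") {
				fields[label] = strings.Split(strings.TrimPrefix(line, label+"-"), ",")
			}
		}
	}

	// Print one tab-separated row per entry; pasting this into Word fills the table columns.
	for i := range fields["Name"] {
		fmt.Printf("%d\t%s\t%s\t%s\n", i+1, fields["Phone"][i], fields["Name"][i], fields["Salary"][i])
	}
}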
I'm trying to accomplish something that will let a user download a file from a web application onto their system. The file will contain a unique five digit code. Using this unique five digit code the users can search for a file in their file system.
I'm wondering where the best place is to put this five digit code in the file so that users can easily search for it. The simplest approach would be to put it in the name of the file; however, users can easily change the name of the file.
I'm looking for a field where I can put the code so that users won't be able to modify it but will still be able to search for it. Is this possible?
When you say file, what kind of file format do you mean? I'm asking because a file is just a pile of bytes, and you can append your 5 digit code anywhere in the file if it is your own file format. But if you tell us which file format you use, there are probably some fields which can be used to search for it. For example, TIFF has many tags, images have other metadata, etc.
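For illustration only, if it really is your own file format, appending the code at the end and reading it back is about as simple as it gets (the file name and code below are made up; whether your users' search tools will look inside the file is a separate question):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Append the five digit code to the end of our own file format.
	f, err := os.OpenFile("report.dat", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		panic(err)
	}
	if _, err := f.WriteString("12345"); err != nil {
		panic(err)
	}
	f.Close()

	// Read it back: the code is the last five bytes of the file.
	data, err := os.ReadFile("report.dat")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data[len(data)-5:]))
}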