Convert Markdown table from SAP to Excel? - power-automate

I have a text table in a .txt file, in a near-markdown layout, from a daily SAP export that is emailed out and intercepted by Power Automate. It looks similar to this:
-------------------------------------------------------------------------------------------
|PLnt |MRPCn|Material |Material Description |Pln stock|Total stck|
-------------------------------------------------------------------------------------------
|0201 |F4 |202011 |Hyaluronic Acid No. 43 | 0 | 0 |
|0201 |F4 |202104 |Blue 40 NGSTE (Post) | 47 | 47 |
|0201 |F4 |202203 |Nitroglycerine 44 | 49 | 49 |
By near-markdown, I mean that if I put this into the online markdown converter at tableconvert.com it mostly unpacks it into CSV but leaves the dashed lines. I don't know what format it really is.
I need to use Power Automate to intercept the email with the text file, convert it to a tidy Excel workbook, run scripts on that workbook (eventually), and then send it on its way.
Is this an SAP-specific markdown format, or is it some other standard? Is there a non-manual way to do this? Flow doesn't seem to have any built-in methods that I can find. My current plan is to break the txt file into an array and loop through it, using the | as a separator and trimming the spaces.
I know there is a way to export and email Excel from SAP with ABAP, but unfortunately I don't have access to that option.
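In concrete terms, the split-and-trim plan would look something like this (sketched in Ruby purely to illustrate the logic; in Power Automate the same steps would be expressed with split()/trim() expressions in an Apply to each loop or in an Office Script, and the file names here are placeholders):

require "csv"

# Read the exported text file, skip the dashed separator lines,
# split each remaining line on "|" and trim the padding spaces.
rows = File.readlines("sap_export.txt", chomp: true)
           .reject { |line| line.strip.empty? || line.strip.start_with?("-") }
           .map    { |line| line.split("|").map(&:strip).reject(&:empty?) }

# Write the result as CSV, which Excel (or a later step) can pick up.
CSV.open("sap_export.csv", "w") { |csv| rows.each { |row| csv << row } }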

Related

How to define steps for Scenario Outline with ruby

I have a registration form and need to test it using Cucumber and Ruby.
I decided to use a Scenario Outline with different values in a table:
Scenario Outline: Log in with valid data
Given I am on the Sign up Form
When I provide <Email>
And I provide Confirm <CEmail>
And I provide <Password>
And I provide Confirm <CPassword>
And I click on Register button
Then I registered to the site
Examples:
| Email | CEmail | Password | CPassword |
| vip17041#yopmail.com |vip17041#yopmail.com | 123 | 123 |
| vip17042#yopmail.com |vip17042#yopmail.com |123 | 123 |
Now I need to create the step definitions. In the step definitions I need to put the values from the table into the fields.
How can I do that? Previously I used the following method:
When(/^I provide vip(\d+)#yopmail\.com$/) do |email|
browser.text_field(:name, "Email").set("email#yopmail.com")
end
But how can I set the email from my table instead of the hard-coded email?
Thanks
If you're looking to merge the capture with the email address:
When(/^I provide (vip\d+)#yopmail\.com$/) do |email|
browser.text_field(:name, "Email").set("#{email}#yopmail.com")
end
This will concatenate the captured text (the literal "vip" followed by one or more digits) with the string "#yopmail.com".
A note on how Scenario Outline works
Scenario Outline takes each row of the Examples table and creates an individual scenario, substituting each column's value for the placeholder that matches the column header.
For instance:
Scenario Outline: A note
Given I am logged in as <user>
When I go to the homepage
Then I should see "Welcome Back, <display_name>"
Examples:
| user | display_name |
| rick#stley.com | Rick Astley |
| tammy1992 | Tammy Holmes |
Would be converted into two scenarios:
Scenario: A note
Given I am logged in as rick#stley.com
When I go to the homepage
Then I should see "Welcome Back, Rick Astley"
Scenario: A note
Given I am logged in as tammy1992
When I go to the homepage
Then I should see "Welcome Back, Tammy Holmes"
This makes it no different from writing a normal scenario; the placeholders you use simply complete the step that you are writing.
How I would write your Scenario
Cucumber is a tool meant to bridge the conversational gap between testers, developers and management.
Scenario Outline: Log in with valid data
Given I am on the Sign up Form
When I sign up with the email "<Email>" and password "<Password>"
Then I should be able to log in as "<Email>" with password "<Password>"
Examples:
| Email | Password |
| vip17041#yopmail.com | 123 |
| vip17042#yopmail.com | 123 |
We don't necessarily have to know each individual step of the process, and the feature file shows the intent of the test.
What this seems to be looking for is whether you can log in after registering a new account, so why not write it as such?
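To answer the original step-definition question: with quoted placeholders like "<Email>", the generated step text contains the actual value from the Examples row, so a plain capture group is enough. A minimal sketch, following the Watir-style browser calls from the question (the field and button names are assumptions about the form):

When(/^I sign up with the email "([^"]*)" and password "([^"]*)"$/) do |email, password|
  browser.text_field(:name, "Email").set(email)
  browser.text_field(:name, "ConfirmEmail").set(email)
  browser.text_field(:name, "Password").set(password)
  browser.text_field(:name, "ConfirmPassword").set(password)
  browser.button(:name, "Register").click
end

Then(/^I should be able to log in as "([^"]*)" with password "([^"]*)"$/) do |email, password|
  # assumes a separate login form; adjust the locators and assertion to your app
  browser.text_field(:name, "Email").set(email)
  browser.text_field(:name, "Password").set(password)
  browser.button(:name, "LogIn").click
  expect(browser.text).to include(email)
end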
As someone who has used Cucumber for many years I would advise you to avoid using Scenario Outlines. Features and scenarios are for expressing intent in simple clear terms, not programming things using tables.
You can write your scenario as
Scenario: I should be welcomed when I sign in
Given I am registered
When I sign in
Then I should be welcomed
Good scenarios state what behaviour they are trying to verify in their title, and then have steps that are consistent with this behaviour. They have no need to explain HOW your application implements that behaviour. Putting that information in your scenarios makes them longer, harder to implement, and much more difficult to maintain.
A side effect of such simple scenarios is that the step definitions are much simpler and easier to write. No regexes, params, or table parsing are needed here.
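For example, step definitions for the scenario above could be as small as this (register_user and sign_in_as are hypothetical helpers standing in for whatever your app needs; the Then step assumes Capybara-style matchers):

Given("I am registered") do
  @user = register_user                      # hypothetical helper that creates an account
end

When("I sign in") do
  sign_in_as(@user)                          # hypothetical helper that drives the login form
end

Then("I should be welcomed") do
  expect(page).to have_content("Welcome")    # assumes Capybara; adjust the message to your app
end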
You can see a simple example of this approach here (https://github.com/diabolo/cuke_up/tree/master/features).

Can I export descriptive test names in Selenium / Gherkin / Cucumber?

I have a few tests in feature files that use the Scenario Template method to plug in multiple parameters. For example:
#MaskSelection
Scenario Template: The Mask Guide Is Available
Given the patient is using "<browser>"
And A patient registered and logged in
And A patient selected the mask
When the patient clicks on the "<guide>"
Then the patient should see the "<guide>" guide for the mask with "<guideLength>" slides
Examples:
| browser | guide | guideName | guideLength |
| chrome | mask | Mask | 5 |
| firefox | replacement | Mask Replacement Guide | 6 |
| internetexplorer | replacement | Mask Replacement Guide | 6 |
Currently, this is exporting test results with names like "TheMaskGuideIsAvailableVariant3". Is there any way to have it instead export something like "TheMaskGuideIsAvailable("chrome", "mask", "Mask", "5")"? I have a few tests which export 50+ results, and it's a pain to count the list to figure out exactly which set of parameters failed. I could have sworn the export used to work like this at one time, but I can't seem to replicate that behavior.
Possibly tied to it, recently, I've lost the ability to double-click on the test instance in Test Explorer in Visual Studio and go to the test outline in its file. Instead, nothing happens and I have to manually go to that file.
The answer to the Variant situation is that the part appended to the test name is taken from the first column of the Examples table. If there are non-unique items in the first column, the results get exported as numbered "Variants".
The answer I found for exporting the list is to use vstest.console with the "/ListTests" option. As per the prior paragraph, since the first column is the one used for naming, you can add a column containing a concatenated list of the parameters, as sketched below.
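A first column that exists only to give each row a unique, descriptive name might look like this (a sketch; the testName column is not used by any step):

Examples:
| testName                     | browser          | guide       | guideName              | guideLength |
| chrome-mask                  | chrome           | mask        | Mask                   | 5           |
| firefox-replacement          | firefox          | replacement | Mask Replacement Guide | 6           |
| internetexplorer-replacement | internetexplorer | replacement | Mask Replacement Guide | 6           |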

How to fetch all records using NCBI Batch Entrez

I have over 200,000 accessions in a flat file, and I need to retrieve the relevant entry for each of them from NCBI.
I use Batch Entrez (http://www.ncbi.nlm.nih.gov/sites/batchentrez) to do the job, but I have encountered several problems:
The initial file was split into multiple sub-files, each containing 4000 lines. But it seems Batch Entrez has a size limitation on the returned file. For example, if the first 1000 accessions each return tens of thousands of lines and hit that size limit, then the remaining 3000 accessions are rejected and won't be searched.
One possible solution is to split the file into even more sub-files and search them individually. However, this requires too much manual effort.
So I am just wondering if there is any other solution, or any code that could be used.
Thanks in advance
Your problem sounds like a good fit for one of the Bio* toolkits. This is a solution using BioSmalltalk:
| giList gbReader |
giList := (BioObject openFullFileNamed: 'd:\Batch_entrez_1.txt') contents lines.
gbReader := BioNCBIGenBankReader new.
gbReader
genBankRecordsFrom: 'nuccore'
format: #setModeXML
uids: giList.
(BioGBSeqCollection newFromXMLCollection: gbReader searchResults)
collect: [: e | BioParser
tokenizeNcbiXmlBlast: e contents
nodes: #('GBAuthor' 'GBSeq_definition') ]
To execute/debug the script, just select it and a right-click will open the Smalltalk world-menu.
The API automatically splits and fetches your accession list (in the script, contained in Batch_entrez_1.txt), respecting the NCBI Entrez post limits to avoid penalties.
The result format is XML (which is an "easy" format to parse or filter for specific fields), although it could be any of the retrieval modes supported by Entrez; for example, setting #setModeText will answer an ASN.1 representation. Replace 'nuccore' with the database you want to query. Finally, choose the fields of interest: in the script I have chosen 'GBAuthor' and 'GBSeq_definition', but you are free to choose any of the available nodes.
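If BioSmalltalk is not an option, the same chunk-and-fetch idea can be scripted directly against the NCBI E-utilities efetch endpoint, which removes the manual splitting. A rough Ruby sketch under assumed parameters (the batch size, database, return type and output file names are illustrative; NCBI also asks you to throttle requests, and an API key raises the rate limit):

require "net/http"
require "uri"

EFETCH = URI("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi")

accessions = File.readlines("Batch_entrez_1.txt", chomp: true).reject(&:empty?)

accessions.each_slice(200).with_index(1) do |batch, index|
  response = Net::HTTP.post_form(EFETCH,
    "db"      => "nuccore",           # the Entrez database to query
    "id"      => batch.join(","),     # a few hundred ids per POST keeps responses small
    "rettype" => "gb",                # GenBank flat file; use retmode=xml for XML records
    "retmode" => "text")
  File.write(format("batch_%04d.gb", index), response.body)
  sleep 0.4                           # stay under the request-rate limit
end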

Cucumber load data tables dynamically

I am currently trying to use Cucumber together with Capybara for some integration tests of a web app.
There is one test where I just want to click through all (or most of) the pages of the web app and see if no error is returned. I want to be able to see afterwards which pages are not working.
I think that Scenario Outlines would be the best approach, so I started this way:
Scenario Outline: Checking all pages
When I go on the page <page>
Then the page has no HTTP error response
Examples:
| page |
| "/resource1" |
| "/resource2" |
...
I currently have 82 pages and that works fine.
However, I find this approach is not maintainable, as there may be new resources and resources that will be deleted.
A better approach would be to load the data for the table from somewhere (parsing the HTML of an index page, the database, etc.).
But I did not figure out how to do that.
I came across an article about table transformations but I could not figure out how to use such a transformation in a Scenario Outline.
Are there any suggestions?
OK, since there is some confusion: if you have a look at the example above, all I want to do is change it so that the table is almost empty:
Scenario Outline: Checking all pages
When I go on the page <page>
Then the page has no HTTP error response
Examples:
| page |
| "will be generated" |
Then I want to add a transformation that looks something like this:
Transform /^table:page$/ do
all_my_pages.each do |page|
table.hashes << {:page => page}
end
table.hashes
end
I specified the transformation in the same file, but it is not executed, so I assume that transformations don't work with Scenario Outlines.
Cucumber is really the wrong tool for that task; you should describe functionality in terms of features. If you want to describe behaviour programmatically, you should use something like RSpec or test-unit.
Also, your scenario steps should be descriptive and concrete, like written text, not abstract phrases like those used in a programming language. They should not include "incidental details" like the exact URL of a resource or its id.
Please read http://blog.carbonfive.com/2011/11/07/modern-cucumber-and-rails-no-more-training-wheels/ and watch http://skillsmatter.com/podcast/home/refuctoring-your-cukes
Concerning your question about "inserting into tables": yes, it is possible if you mean adding additional rows to it; in fact, you could do anything you like with it.
The result of the Transform block completely replaces the original table.
Transform /^table:Name,Posts$/ do
# transform the table into a list of hashes
results = table.hashes.map do |row|
user = User.create! :name => row["Name"]
posts = (1..row["Posts"].to_i).map { |i| Post.create! :title => "Nr #{i}" }
{ :user => user, :posts => posts }
end
# append another hash to the results (e.g. a User "Tim" with 2 Posts)
tim = User.create! :name => "Tim"
tims_posts = [Post.create! :title => "First", Post.create! :title => "Second"]
results << { :user => tim, :posts => tims_posts }
results
end
Given /^I have Posts of the following Users:$/ do |transformation_results|
transformation_results.each do |row|
# assign Posts to the corresponding User
row[:user].posts = row[:posts]
end
end
You could combine this with Scenario Outlines like this:
Scenario Outline: Paginate the post list of a user at 10
Given I have Posts of the following Users:
| Name | Posts |
| Max | 7 |
| Tom | 11 |
When I visit the post list of <name>
Then I should see <count> posts
Examples:
| name | count |
| Max | 7 |
| Tom | 10 |
| Tim | 2 |
This should demonstrate why "adding" rows to a table might not be best practice.
Please note that it is impossible to expand Examples placeholders inside of a table:
Scenario Outline: Paginate the post list of a user at 10
Given I have Posts of the following Users:
| Name | Posts |
| <name> | <existing> | # won't work
When I visit the post list of <name>
Then I should see <displayed> posts
Examples:
| name | existing | displayed |
| Max | 7 | 7 |
| Tom | 11 | 10 |
| Tim | 2 | 2 |
For the specific case of loading data dynamically, here's a suggestion:
A class, let's say PageSets, with methods such as all_pages_in_the_sitemap_errorcount and developing_countries_errorcount.
A step that reads something like:
Given I am on the "Check Stuff" page
Then there are 0 errors in the "developing countries" pages
or
Then there are 0 errors in "all pages in the sitemap"
The Then step converts the string "developing countries" into the method name developing_countries_errorcount and attempts to call it on the PageSets class. The step expects all _errorcount methods to return an integer in this case. Returning data structures like maps gives you many possibilities for writing succinct dynamic steps.
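A sketch of how that Then step could do the conversion (PageSets and the _errorcount methods are the hypothetical class and naming convention described above):

Then(/^there are (\d+) errors in(?: the)? "([^"]*)"(?: pages)?$/) do |expected, page_set|
  method_name = page_set.downcase.tr(" ", "_") + "_errorcount"
  actual = PageSets.new.send(method_name)    # e.g. developing_countries_errorcount
  expect(actual).to eq(expected.to_i)
end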
For more static data we have found YAML very useful for making our tests self-documenting and self-validating, and for helping us remove hard-to-maintain literals like "5382739" whose meaning we've all forgotten three weeks later.
The YAML format is easy to read and can be commented if necessary (it usually isn't.)
Rather than write:
Given I am logged in as "jackrobinson#gmail.com"
And I select the "History" tab
Then I can see 5 or more "rows of history"
We can write instead:
Given I am logged in as "a member with at least 5 items of history"
When I select the "History" tab
Then I can see 5 or more "rows of history"
In file logins.yaml....
a member with at least 5 items of history:
  username: jackrobinson#gmail.com
  password: WalRus
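The matching step definition then only has to look the quoted persona up in that file (a sketch; log_in_with is a hypothetical helper wrapping your real login steps):

require "yaml"

Given(/^I am logged in as "([^"]*)"$/) do |persona|
  credentials = YAML.load_file("logins.yaml").fetch(persona)
  log_in_with(credentials["username"], credentials["password"])   # hypothetical helper
end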
We use YAML to hold sets of data relating to all sorts of entities like members, providers, policies, ... the list is growing all the time:
In file test_data.yaml...
a member who has direct debit set up:
  username: jackrobinson#gmail.com
  password: WalRus
  policyId: 5382739
  first name: Jack
  last name: Robinson
  partner's first name: Sally
  partner's last name: Fredericks
It's also worth looking at YAML's multi-line text facilities if you need to verify text. Although that's not usual for automation tests, it can sometimes be useful.
I think the better approach would be to use a different tool, one just for crawling your site and checking that no error is returned. Assuming you're using Rails, the tool you might consider is Tarantula:
https://github.com/relevance/tarantula
I hope that helps :)
A quick hack is to change the Examples collector code and use Ruby's eval to run your customized Ruby function to overwrite the default collected Examples data; here is the code:
generate-dynamic-examples-for-cucumber
Drawback: you need to change the scenario_outline.rb file.

Building a reverse language dictionary

I was wondering what it takes to build a reverse language dictionary.
The user enters something along the lines of: "red edible fruit" and the application would return: "tomatoes, strawberries, ..."
I assume these results should be based on some form of keywords such as synonyms, or some form of string search.
This is an online implementation of this concept.
What's going on there and what is involved?
EDIT 1:
The question is more about the "how" than the "which tool"; however, feel free to suggest the tools you think would do the job.
OpenCyc is a computer-usable database of real-world concepts and meanings. From their web site:
OpenCyc is the open source version of the Cyc technology, the world's largest and most complete general knowledge base and commonsense reasoning engine. OpenCyc can be used as the basis of a wide variety of intelligent applications
Beware though, that it's an enormously complex reasoning engine -- real-world facts never were simple. Documentation is quite sparse and the learning curve is steep.
Any approach would basically involve having a normalized database. Here is a basic example of what your database structure might look like:
// terms
+-------------------+
| id | name |
| 1 | tomatoes |
| 2 | strawberries |
| 3 | peaches |
| 4 | plums |
+-------------------+
// descriptions
+-------------------+
| id | name |
| 1 | red |
| 2 | edible |
| 3 | fruit |
| 4 | purple |
| 5 | orange |
+-------------------+
// connections
+-------------------------+
| terms_id | descript_id |
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 2 | 1 |
| 2 | 2 |
| 2 | 3 |
| 3 | 1 |
| 3 | 2 |
| 3 | 5 |
| 4 | 1 |
| 4 | 2 |
| 4 | 4 |
+-------------------------+
This would be a fairly basic setup; however, it should give you an idea of how many-to-many relationships using a look-up table work within databases.
Your application would have to break apart the input string and be able to normalize it, for example getting rid of suffixes on the user's words. The script would then query the connections table and return the matching terms.
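The look-up itself is the classic "match every word" query against that schema: join terms to descriptions through the connections table, keep only the requested description words, and require that a term matched all of them. A sketch in Ruby that just builds the parameterised SQL (the table and column names are the ones from the example above; execute it with whatever DB driver you use):

# normalized words from an input like "red edible fruit"
words = "red edible fruit".downcase.split

placeholders = (["?"] * words.size).join(", ")
sql = <<~SQL
  SELECT t.name
  FROM terms t
  JOIN connections c ON c.terms_id = t.id
  JOIN descriptions d ON d.id = c.descript_id
  WHERE d.name IN (#{placeholders})
  GROUP BY t.id, t.name
  HAVING COUNT(DISTINCT d.id) = #{words.size}   -- the term must match every word
SQL

puts sql   # bind `words` as the parameters when executing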
To answer the "how" part of your question, you could utilize human computation: There are hordes of bored teenagers with iPhones around the globe, so create a silly game whose byproduct is filling your database with facts -- to harness their brainpower for your purposes.
Sounds like an awkward concept at first, but look at this lecture on Human Computation for an example.
First, there must be some way of associating concepts (like 'snow') with particular words.
So rather than simply storing a wordlist, you would also need to store concepts or properties like "red", "fruit", and "edible" as well as the keywords themselves, and model relationships between them.
At a simple level, you could have two tables (they don't have to be database tables): a list of keywords and a list of concepts/properties/adjectives. You then model the relationship by storing a third table which represents the mapping from keyword to adjective.
So if you have:
keywords:
0001 aardvark
....
0050 strawberry
....
0072 tomato
....
0120 zoo
and concepts:
0001 big
0002 small
0003 fruit
0004 vegetable
0005 mineral
0006 metal
....
0250 black
0251 blue
0252 red
....
0570 edible
you would need a mapping containing:
0050 -> 0003
0050 -> 0252
0050 -> 0570
0072 -> 0003
0072 -> 0252
0072 -> 0570
You may like to think of this as modelling an "is" relationship: 0050 (a strawberry) "is" 0003 (fruit), and "is" 0252 (red), and "is" 0570 (edible).
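In code, the reverse lookup over that mapping is just a set intersection: collect the keyword ids linked to every requested concept and keep only the ids common to all of them. A small, self-contained sketch using the ids above:

require "set"

keywords = { 50 => "strawberry", 72 => "tomato", 120 => "zoo" }

# keyword id -> concept ids ("is" relationships): 3 = fruit, 252 = red, 570 = edible
mapping = {
  50  => Set[3, 252, 570],
  72  => Set[3, 252, 570],
  120 => Set[],
}

def lookup(query_concept_ids, mapping, keywords)
  mapping.select { |_, concept_ids| query_concept_ids.subset?(concept_ids) }
         .keys
         .map { |id| keywords[id] }
end

puts lookup(Set[3, 252, 570], mapping, keywords).inspect   # => ["strawberry", "tomato"]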
How will your engine know that
"An incredibly versatile ingredient, essential for any fridge chiller drawer. Whether used for salads, soups, sauces or just raw in sandwiches, make sure they are firm and a rich red colour when purchased",
"mildly acid red or yellow pulpy fruit eaten as a vegetable", and
"an American musician who is known for being the lead singer/drummer for the alternative rock band Sound of Urchin"
all map to the same original word? Natural language definitions are unstructured; you can't store them in a normalized database. You can attempt to structure them by reducing them to an ontology, like Princeton's WordNet, but creating and using ontologies is an extremely difficult problem, the topic of PhD theses and well-funded advanced research.
It should be fairly straightforward. You can use straight synonyms in addition to a series of words to define each word. The word order in the definition is sometimes important. Each word can have multiple definitions, of course.
You can develop a rating system to see which definitions are the closest match to the input, then display the top 3 or 4 words.
What about using a dictionary and performing a full-text search over the definitions (after removing linking words and articles, like 'and', 'or', ...), then returning the word which has the best score (highest number of matching words, or maybe a more complicated scoring method)?
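A rough sketch of that idea: drop the linking words from the query, then rank each dictionary entry by how many of the remaining query words appear in its definition (the toy definitions below are only for illustration; a real implementation would add stemming, synonyms and better weighting):

STOP_WORDS = %w[a an and or the of as in].freeze

def score(query, definition)
  query_words = query.downcase.split - STOP_WORDS
  (query_words & (definition.downcase.split - STOP_WORDS)).size
end

definitions = {
  "tomato"     => "mildly acid red or yellow pulpy fruit eaten as a vegetable",
  "strawberry" => "sweet soft red fruit with a seed-studded surface",
  "zoo"        => "a place where wild animals are kept for exhibition",
}

query = "red edible fruit"
best  = definitions.sort_by { |_, definition| -score(query, definition) }.first(3)
puts best.map { |word, _| word }.inspect   # e.g. ["tomato", "strawberry", "zoo"]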
There are several ways you can go about this depending on how much work you want to put into it. One way you can build a reverse dictionary is to use the definitions to help calculate which words are closely related. This way can be the most difficult because you need to have a pretty extensive algorithm that can associate phrases.
Finding Similar Definitions
One way you could do this is by matching the definition string with the others and seeing which ones match the closest. In PHP you can use the similar_text function. The problem with this method is that if your database has a ton of words and definitions, you will incur a lot of overhead on your SQL DB.
Use An API
There are several resources out there you can use to help you get a reverse dictionary by using an API. Here are some of them.
https://www.wordgamedictionary.com/api/ Has an API and includes a working reverse dictionary
http://developer.wordnik.com/docs.html#!/words/reverseDictionary_get_2 Just the API
http://www.onelook.com/reverse-dictionary.shtml Just has a working Reverse Dictionary
This sounds like a job for Prolog.
