I am working with C# in a .NET Core MVC web application, and I would like to keep a persistent list of key/value pairs between the controller and views across multiple calls back to the controller, accumulating pairs as I go. Is there a reasonably simple way to do this? I have looked into the TempData[] technique, but it seems to become excessively complex when working with a list or collection of paired data.
I'd appreciate some ideas.
Thanks.
I would like to keep a persistent list of key/value pairs between the controller and views across multiple calls back to the controller, accumulating pairs as I go.
I have looked into the TempData[] technique, but it seems to become excessively complex when working with a list or collection of paired data.
Normally we store simple values (strings, numbers, etc.) in TempData. To store complex types in TempData, we have to serialize them to a string-based format (such as JSON) first.
If you do not want the extra serialization that TempData requires, you can store the list of key/value pairs in session state instead, which persists data across requests.
For more information about state management in ASP.NET Core, you can check this official doc:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/app-state?view=aspnetcore-5.0
I've been using Ruby Selenium-Webdriver for one of the automation scripts I'm developing, and I'm being asked to use Page Objects. We use Page Objects a lot, but for this application I am using a CSV file instead: I have defined all the XPaths used in my application in a CSV file, and I parse that file in my script to refer to those objects. Is there much of a difference between defining Page Objects in a class and using a CSV file, apart from the performance concern? I believe a CSV file will be a win for us from a configuration standpoint and will make maintenance much easier. Any suggestions?
Edit - In our use case we're actually automating applications built on a cloud-based tool, so all the applications share the same HTML design structure. We define XPath patterns in the CSV and then pass parameters to custom methods we've developed that generate the XPaths automatically from the CSV, instead of finding them manually, which would be overhead for us because we already know all the applications share a similar XPath pattern for every element.
Thanks
I think POM is better than the CSV approach. In POM you put the elements for a page in a separate class file, so when something changes it's easier to find what to update. It also won't get as messy as a CSV file, and you don't need an extra utility function to parse it.
There is also a page-object gem that provides a set of libraries on top of webdriver/watir, simplifying the code.
Plus, why XPaths? XPath is one of the least recommended ways to identify an element.
As for the framework aspect, CSV should be more of a maintenance problem than Page Objects. It's the basic difference between text and code: Page Objects enforce an object-oriented approach on your elements, which is not possible with a CSV.
In the best-case scenario, you have created a column or separate sheets defining which page each element's XPath belongs to. That sounds like overhead. As your application and suite grow, there can be thousands of elements; imagine parsing or manually updating a CSV with that much data.
In Page Objects, by contrast, your elements are scoped to the page, so any change to the app also tells you which elements may be impacted. And when you define your elements as objects in a Page Object rather than rows in a CSV, you don't need to explicitly create them by reading the CSV.
It completely depends on the application and the type of test you might perform.
Since it is an automated test script, you do not have to worry much about its performance (parsing might take a few more milliseconds, which should be OK).
Maintaining all the element identification properties and corresponding actions in a CSV file makes maintenance easier and makes the framework application-independent, which is nice. But it is a bit harder to make the framework itself robust. Both approaches have their own pros and cons.
Refer to the posts below [the examples are in Java, but you will get the idea]:
Keyword driven framework
Advanced Page Objects
Update:
If you like both, you can come up with your own implementation that integrates the two:
import java.util.Map;
import org.openqa.selenium.WebElement;

@ObjectRepository(src = "/login.csv") // hypothetical annotation pointing at the element CSV
public class LoginPage {
    // populated from login.csv by the custom repository loader
    private Map<String, WebElement> elements;

    public void login() {
        elements.get("username").sendKeys("");
        elements.get("password").sendKeys("");
        elements.get("signin").click();
    }
}
That is, define all the elements in a config file (CSV/JSON, etc.), let the page object class load its elements from that file, and keep all the methods in the page class, as in the loader sketched below.
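A minimal sketch of the loader side in Java, assuming a two-column name,xpath CSV; the class name, the eager findElement strategy, and the file layout are illustrative assumptions:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class ObjectRepositoryLoader {

    // Reads a "name,xpath" CSV and resolves each entry to a WebElement.
    // Eager lookup keeps the sketch short; a lazy lookup per call would
    // avoid stale references on dynamic pages.
    public static Map<String, WebElement> load(WebDriver driver, String csvPath)
            throws IOException {
        Map<String, WebElement> elements = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(csvPath))) {
            if (line.trim().isEmpty()) continue;
            // Split on the first comma only, since XPaths may contain commas.
            String[] parts = line.split(",", 2);
            elements.put(parts[0].trim(), driver.findElement(By.xpath(parts[1].trim())));
        }
        return elements;
    }
}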
I think this should be an easy one, but I haven't found a clear answer on what the best practice would be.
In an application, we keep the current status of an order (open, canceled, shipped, closed, ...).
These values cannot change without a code change, but the application should meet the following criteria:
status names should be easily displayed in different languages,
application can search via freetext status names (like googling for "open")
status_id should be available to the developer via an enum
zero headache when adding new statuses
Possible ways we have tackled this so far:
having a DB table status with PK(id, language_id) and a separate enum representing these statuses in the application.
PROS: 1., 2., 3. work out of the box. CONS: 4. requires running an update script on every client installation, and SQL selects can become large and cumbersome when dealing with a lot of code tables.
having just an enum:
PROS: 3., 4. CONS: 1., 2. are a total nightmare.
having enums which populate the database tables on each start of the application:
PROS: 1., 2., 3., 4. all work. CONS: some overhead on application start, and SQL selects can become large and cumbersome when dealing with a lot of code tables.
What is the most common way of tackling this problem?
Sounds like you summarized it pretty well yourself, and comparing the pros and cons points towards #3. Just one comment when you implement #3, though:
Use a caching mechanism (even a simple HashMap!) and add an option to refresh the cache; this will ease your work when you want to change values (without the need to restart every time!). Something like the sketch below works.
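A minimal sketch of such a refreshable cache; the StatusDao interface and its loadAll() query are hypothetical stand-ins for however you read the status table:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface StatusDao {
    // e.g. SELECT id, name FROM status
    Map<Integer, String> loadAll();
}

public class StatusCache {
    private final Map<Integer, String> byId = new ConcurrentHashMap<>();
    private final StatusDao statusDao;

    public StatusCache(StatusDao statusDao) {
        this.statusDao = statusDao;
        refresh();
    }

    // Re-reads the table so edited values show up without a restart.
    public final void refresh() {
        byId.clear();
        byId.putAll(statusDao.loadAll());
    }

    public String nameOf(int statusId) {
        return byId.get(statusId);
    }
}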
I would, and do, use method 3 because it is the best of the lot. You can use resource files to store the translations in and map the enum values to keys in the resource files. Your database can contain the id of the enum for the status.
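A sketch of that enum-to-resource-file mapping in Java (the bundle name messages and the status.* keys are assumptions):

import java.util.Locale;
import java.util.ResourceBundle;

public enum OrderStatus {
    OPEN(1), CANCELED(2), SHIPPED(3), CLOSED(4);

    private final int id;

    OrderStatus(int id) { this.id = id; }

    // Stored in the database as a stable numeric id.
    public int getId() { return id; }

    // Looks up e.g. "status.open" in messages_de.properties for German.
    public String displayName(Locale locale) {
        return ResourceBundle.getBundle("messages", locale)
                             .getString("status." + name().toLowerCase());
    }
}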
1. status names should be easily displayed in different languages,
2. application can search via freetext status names (like googling for "open")
These are the interface layer's concern; you'd better not mix them into your domain model.
I would set up a mapping between the status enum and i18n codes. The mapping could be stored in a file (cached in memory) or hardcoded.
For example, if you use a DTO or view adapter to render your UI:
public class OrderDetailViewAdapter {
    private Order order;
    private I18nMapper i18nMapper; // maps a status enum to an i18n code

    public String getStatus() {
        return i18nMapper.to(order.getStatus()); // hardcoded switch-case or file-based impl
    }
}
Or you could do this before populating your DTOs.
You could use a similar solution for goal 2: when the user types text, find the corresponding enum from the mapping and use the enum for the search, as in the sketch below.
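A sketch of that reverse lookup, reusing the OrderStatus enum with the displayName(locale) helper sketched earlier (the class and method names are assumptions):

import java.util.Locale;
import java.util.Optional;

public class StatusSearch {

    // Resolves free text like "open" to the enum used for the actual query.
    public Optional<OrderStatus> fromFreeText(String text, Locale locale) {
        String needle = text.trim().toLowerCase(locale);
        for (OrderStatus status : OrderStatus.values()) {
            if (status.displayName(locale).toLowerCase(locale).contains(needle)) {
                return Optional.of(status);
            }
        }
        return Optional.empty();
    }
}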
Anyway, the fewer DB tables you use, the better.
Personally, I always use a dedicated enum class inside the domain. The only responsibility of this class is holding the status name (OPEN, CANCELED, SHIPPED, ...). The status name is not visible outside the codebase. The status can also be stored in a database field as a string (varchar or similar).
For rendering, depending on the number of use cases, I sometimes implement formatting inside a formatter (e.g. OrderFormatter::formatStatusName(), OrderFormatter::formatAbbreviatedStatusName(), ...). If formatting is needed often, I create a dedicated class with all the formatting styles needed (OrderStatusFormatter::short(), OrderStatusFormatter::abbreviated(), ...). Of course, an internal mapping is needed to map the status name to the status title, and this is the tricky part. But if you want layering, you can't avoid mapping; a sketch of such a formatter follows below.
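A sketch of such a presentation-layer formatter, assuming an OrderStatus enum like the one shown earlier (the mapping itself is illustrative):

public final class OrderStatusFormatter {

    private OrderStatusFormatter() {}

    // The internal status-name-to-title mapping lives here, in the
    // presentation layer, keeping the domain enum free of display concerns.
    public static String full(OrderStatus status) {
        switch (status) {
            case OPEN:     return "Open";
            case CANCELED: return "Canceled";
            case SHIPPED:  return "Shipped";
            case CLOSED:   return "Closed";
            default:       return status.name().toLowerCase();
        }
    }

    public static String abbreviated(OrderStatus status) {
        return full(status).substring(0, 1); // e.g. "O" for Open
    }
}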
Translation has not been dealt with so far: I translate strings inside templates, so formatters are free of that responsibility. To summarize:
enum inside domain model
formatter inside presentation layer
translation inside template
There is no need to create a special table for order status translations. A better choice would be to implement a generic translation mechanism, separated from your business code.
I'm building a report that needs to include an 'estimate' column, which is based on data that's not available in the dataset.
Ideally I'd like to be able to define a Java interface
public int getEstimate(int foo_id, int bar_id, int quantity);
where foo_id, bar_id and quantity are available in the row I want the estimate presented.
There will be multiple strategies for producing the estimate so it would be good to use an interface to allow swapping them when needed.
Looking at the BIRT docs, I think it's possible I ought to be using the event handler mechanisms, but that seems to only allow defining a class to use and I'd somehow like to inject a configured estimator.
A non-obfuscated example might be to say that I have a dataset which includes an IP address column, and I'd like to be able to use some GeoIP service to resolve the country from the IP address. In that case I'd have an interface public String getCountryName(String address) and the actual implementations may use MaxMind, a local cache or some other system.
How would I go about doing this?
Or... would I be better off writing a scripted data source that can integrate the computed data before delivering it to BIRT?
Or... some sort of scripted data source that is then used to create a join data set?
I think a Scripted Data Source would work fine, but a Java-based event handler would be more straightforward. You can implement it as a simple POJO and get access to all the complex objects and tools that will allow you to calculate your estimate. The simplest solution of all may be to add a calculated field to the data set.
When creating the calculated field, you can get pretty complex in terms of the scripting logic you can leverage to produce the resulting value. The nicest thing about this route is that all the other column values in the row (which I assume you need to calculate the estimate) are made available via the Expression editor. You can pull in complex objects (POJOs) to help in your calculations here as well by using the "Packages" object (e.g. var red = new Packages.redwood.HelloWorld()).
If you want to create the event handler class, here is what I would do. I would create a text object, bind its onCreate event to your POJO (by extending TextItemEventAdapter), and override the "onCreate" method. There you can do any work you want and, at the end, simply call 'text.setText(theEstimateResult);' to make the estimate itself visible. You can get at the data values needed for your calculations in the POJO too: I assume the estimate will be part of a larger table of values, and you can access any specific row value via the reportContext. A sketch of this approach follows below.
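A sketch of such a handler, assuming the BIRT scripting API (TextItemEventAdapter, getRowData(), getAppContext()) as described in the BIRT docs; the "estimator" app-context key and the column names are assumptions:

import org.eclipse.birt.report.engine.api.script.IReportContext;
import org.eclipse.birt.report.engine.api.script.eventadapter.TextItemEventAdapter;
import org.eclipse.birt.report.engine.api.script.instance.ITextItemInstance;

public class EstimateHandler extends TextItemEventAdapter {

    @Override
    public void onCreate(ITextItemInstance text, IReportContext reportContext) {
        try {
            // The configured estimator is injected by the code that runs the
            // report, e.g. task.getAppContext().put("estimator", impl), which
            // is one way to swap in different estimation strategies.
            Estimator estimator =
                    (Estimator) reportContext.getAppContext().get("estimator");

            int fooId    = ((Number) text.getRowData().getColumnValue("foo_id")).intValue();
            int barId    = ((Number) text.getRowData().getColumnValue("bar_id")).intValue();
            int quantity = ((Number) text.getRowData().getColumnValue("quantity")).intValue();

            text.setText(String.valueOf(estimator.getEstimate(fooId, barId, quantity)));
        } catch (Exception e) {
            text.setText("n/a"); // fall back rather than break rendering
        }
    }
}

// The interface from the question.
interface Estimator {
    int getEstimate(int fooId, int barId, int quantity);
}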
Those are the two ideas I would give a try first. The computed column is the fastest to implement and the least likely to throw you a curve during deployment. Let me know which way you choose and we can hash it out further if needed.
Essentially, I have to get a flat file into a database. The flat files come in with the first two characters on each line indicating which type of record it is.
Do I create a class for each record type with properties matching the fields in the record? Should I just use arrays?
I want to load the data into some sort of data structure before saving it in the database so that I can use unit tests to verify that the data was loaded correctly.
Here's a sample of what I have to work with (BAI2 bank statements):
01,121000358,CLIENT,050312,0213,1,80,1,2/
02,CLIENT-STANDARD,BOFAGB22,1,050311,2359,,/
03,600812345678,GBP,fab1,111319005,,V,050314,0000/
88,fab2,113781251,,V,050315,0000,fab3,113781251,,V,050316,0000/
88,fab4,113781251,,V,050317,0000,fab5,113781251,,V,050318,0000/
88,010,0,,,015,0,,,045,0,,,100,302982205,,,400,302982205,,/
16,169,57626223,V,050311,0000,102 0101857345,/
88,LLOYDS TSB BANK PL 779300 99129797
88,TRF/REF 6008ABS12300015439
88,102 0101857345 K BANK GIRO CREDIT
88,/IVD-11 MAR
49,1778372829,90/
98,1778372839,1,91/
99,1778372839,1,92
I'd recommend creating classes (or structs, or whatever value type your language supports), as
record.ClientReference
is so much more descriptive than
record[0]
and, if you're using the (wonderful!) FileHelpers Library, then your terms are pretty much dictated for you.
Validation logic usually has at least two levels: the coarser level is "well-formatted" and the finer level is "correct data".
There are a few separate problems here. One issue is simply verifying the data, or writing tests to make sure that your parsing is accurate. A simple way to do this is to parse into a class that accepts a given range of values and throws an appropriate error if a value is out of range,
e.g.
public void setField1(int i)
{
    if (i > 100) throw new InvalidDataException...
}
Creating different classes for each record type is something you might want to do if the parsing logic is significantly different for different codes, so you don't have conditional logic like
public void setField2(String s)
{
    if (field1==88 && s.equals ...
    else if (field2==22 && s
}
yechh.
When I have had to load this kind of data in the past, I have put it all into a work table with the first two characters in one field and the rest in another. Then I have parsed it out to the appropriate other work tables based on the first two characters. Then I have done any cleanup and validation before inserting the data from the second set of work tables into the database.
In SQL Server you can do this through a DTS (2000) or SSIS package, and with SSIS you may be able to process the data on the fly without storing it in work tables first, but the process is similar: use the first two characters to determine the data-flow branch, then parse the rest of the record into some kind of holding mechanism, and then clean up and validate before inserting. I'm sure other databases also have some mechanism for importing data and would use a similar process.
I agree that if your data format has any sort of complexity you should create a set of custom classes to parse and hold the data, perform validation, and do any other appropriate model tasks (for instance, return a human-readable description, although some would argue this would be better placed in a separate view class). This is probably a good situation for inheritance, where a parent class (possibly abstract) defines the properties and methods common to all record types, and each child class overrides these methods to provide its own parsing and validation as necessary, or adds its own properties and methods; see the sketch below.
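A minimal sketch of that hierarchy in Java against the BAI2 sample above; the field positions and meanings in TransactionRecord are simplified assumptions, not the real BAI2 layout:

// Common base: every BAI2 line starts with a two-character record code.
public abstract class BaiRecord {
    protected final String[] fields;

    protected BaiRecord(String line) {
        // Drop the trailing continuation marker and split the raw fields.
        this.fields = line.replaceAll("/$", "").split(",", -1);
        validate();
    }

    public String recordCode() { return fields[0]; }

    // Each record type supplies its own field-level validation.
    protected abstract void validate();
}

// "16" transaction detail records (e.g. "16,169,57626223,V,...").
class TransactionRecord extends BaiRecord {
    TransactionRecord(String line) { super(line); }

    String typeCode() { return fields[1]; }
    long amount()     { return Long.parseLong(fields[2]); }

    @Override
    protected void validate() {
        if (!"16".equals(fields[0])) {
            throw new IllegalArgumentException("not a 16 record: " + fields[0]);
        }
    }
}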
Creating a class for each type of row would be a better solution than using Arrays.
That said, however, in the past I have used Arraylists of Hashtables to accomplish the same thing. Each item in the arraylist is a row, and each entry in the hashtable is a key/value pair representing column name and cell value.
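In Java terms that structure is just a list of maps; a tiny sketch (the column names are hypothetical):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RowBuffer {
    public static void main(String[] args) {
        List<Map<String, String>> rows = new ArrayList<>();

        Map<String, String> row = new HashMap<>();
        row.put("recordCode", "16");   // first two characters of the line
        row.put("amount", "57626223"); // remaining fields keyed by column name
        rows.add(row);

        // Each list entry is a row; each map entry is column name -> cell value.
        System.out.println(rows.get(0).get("amount"));
    }
}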
Why not start by designing the database that will hold the data? Then you can use Entity Framework to generate the classes for you.
Here's a wacky idea:
If you were working in Perl, you could use DBD::CSV to read data from your flat file, provided you gave it the correct values for the separator and EOL characters. You'd then read rows from the flat file by means of SQL statements; DBI will turn them into standard Perl data structures for you, and you can run whatever validation logic you like. Once each row passes all the validation tests, you'd be able to write it into the destination database using DBD::whatever.
-steve