What is the opposite of JAXB? i.e. generating XML FROM classes?

I am currently designing a solution to a problem I have: I need to dynamically generate an XML file on the fly from Java objects, much as JAXB generates Java classes from XML files, but in the opposite direction. Is there something out there already that does this?
Alternatively, a way in which one could 'save' the state of Java classes.
The goal I am working towards is a dynamically changing GUI, where a user can redesign their GUI in the same way you can with iGoogle.

You already have the answer. It's JAXB! You can annotate your classes and then have JAXB marshal them to XML (and back) without the need to create an XML schema first.
Look at https://jaxb.dev.java.net/tutorial/section_6_1-JAXB-Annotations.html#JAXB%20Annotations to get started.
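For example, a minimal sketch of the idea (the Widget class and its fields are made up for illustration, and only the marshalling direction is shown):
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

// A made-up GUI-state class; public fields are picked up by JAXB's default access type
@XmlRootElement
public class Widget {
    public String title;
    public int column;

    public static void main(String[] args) throws Exception {
        Widget w = new Widget();
        w.title = "weather";
        w.column = 2;
        // Marshal the annotated object straight to XML, no schema required
        Marshaller m = JAXBContext.newInstance(Widget.class).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        m.marshal(w, System.out);
    }
}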

I don't know if this is exactly what you're looking for, but there's java.beans.XMLEncoder, which writes a JavaBean (a class with a public no-arg constructor and getter/setter pairs) out as XML:
import java.beans.XMLEncoder;
import java.io.FileOutputStream;

XMLEncoder enc = new XMLEncoder(new FileOutputStream(file));
enc.writeObject(obj); // obj must follow JavaBean conventions
enc.close();
The result can then be loaded back with XMLDecoder:
import java.beans.XMLDecoder;
import java.io.FileInputStream;

XMLDecoder dec = new XMLDecoder(new FileInputStream(file));
Object obj = dec.readObject();
dec.close();

"generate xml from java objects:"
try xtream.
Here's what is said on the tin:
No mappings required. Most objects can be serialized without need for specifying mappings.
Requires no modifications to objects.
Full object graph support
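A minimal sketch (assuming the XStream jar and its XPP3 dependency are on the classpath; Widget stands in for whatever class you are serializing):
import com.thoughtworks.xstream.XStream;

XStream xstream = new XStream();
String xml = xstream.toXML(widget);          // object graph to XML
Widget back = (Widget) xstream.fromXML(xml); // XML back to an object graph
// Note: recent XStream versions also require registering allowed types
// (a security whitelist) before fromXML will deserialize your classes.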
For saving Java object state:
Serialization is the way to do this in Java.
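For completeness, a minimal sketch of built-in serialization (the class of obj just has to implement java.io.Serializable; state.bin is an illustrative file name):
import java.io.*;

// save
try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("state.bin"))) {
    out.writeObject(obj);
}
// load
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("state.bin"))) {
    Object restored = in.readObject();
}
Bear in mind that the result is binary, not XML, so it is opaque outside your application.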


Using Page Objects vs Config Files in Selenium

I've been using Ruby selenium-webdriver for one of the automation scripts I'm developing, and I'm being asked to use Page Objects. We use page objects a lot, but for this application I am using a CSV file instead: I have defined all the XPaths my application uses in a CSV file, and I parse that file in my script to refer to those objects. Is there much of a difference between using a class to define Page Objects and using a CSV file, apart from the performance concern? I believe using a CSV file will be an add-on for us from a configuration standpoint and will make maintenance much easier. Any suggestions?
Edit - In our use case we're automating applications built on a cloud-based tool, so all the applications share the same design structure from an HTML standpoint. We define XPath patterns in the CSV and then pass parameters to custom methods we've developed that generate the XPaths automatically from the CSV, instead of finding them manually, which would be overhead for us because we already know that all the applications share a similar XPath pattern for every element.
Thanks
I think POM is better than the CSV approach. In POM, you put the elements for a page in a separate class file, so if any change needs to be made, it's easier to find where to make it. Moreover, it won't get as messy as a CSV file, and you don't need an extra utility function to parse it.
There is also a pageobjects gem that provides a set of libraries over and above webdriver/watir, simplifying the code.
Plus, why XPaths? They are one of the least recommended ways to identify an element.
As for the framework aspect, CSV is likely to be more of a maintenance problem than Page Objects. It's the basic difference between text and code: Page Objects enforce an object-oriented approach on your elements, which is not possible with CSV.
In the best-case scenario, you have created a column or separate sheets defining which page each element XPath belongs to. That already sounds like overhead, and as your application and suite grow there can be thousands of elements. Imagine parsing or manually updating a CSV with that much data.
With Page Objects, by contrast, your elements are scoped to a page, so any change to the app also tells you which elements may be affected. And once you define an element as an object in a Page Object rather than as a raw locator string, you no longer need to explicitly create your elements by reading the CSV.
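For contrast, a minimal sketch of a conventional Selenium page object in Java (the locators and field names are illustrative):
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class LoginPage {
    // locators live with the page they belong to
    @FindBy(id = "username") private WebElement username;
    @FindBy(id = "password") private WebElement password;
    @FindBy(id = "signin")   private WebElement signIn;

    public LoginPage(WebDriver driver) {
        PageFactory.initElements(driver, this); // wires up the @FindBy fields
    }

    public void login(String user, String pass) {
        username.sendKeys(user);
        password.sendKeys(pass);
        signIn.click();
    }
}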
It completely depends on the application and the type of tests you perform.
Since it is an automated test script, you do not have to worry much about performance (parsing might take a few more milliseconds, which should be OK).
Maintaining all the element identification properties and their corresponding actions in a CSV file makes maintenance easier and keeps the framework application-independent, which is nice, but it is harder to make such a framework robust. Both approaches have their own pros and cons.
Refer to the posts below (the examples are in Java, but you will get the idea):
Keyword driven framework
Advanced Page Objects
Update:
If you like both, you can come up with your own implementation to integrate the two:
import java.util.Map;
import org.openqa.selenium.WebElement;

// @ObjectRepository is a hypothetical custom annotation telling the
// framework which CSV file holds this page's element definitions
@ObjectRepository(src = "/login.csv")
public class LoginPage {
    // populated by the framework from the CSV, keyed by logical element name
    private Map<String, WebElement> elements;

    public void login(String user, String pass) {
        elements.get("username").sendKeys(user);
        elements.get("password").sendKeys(pass);
        elements.get("signin").click();
    }
}
I.e., define all the elements in a config file (CSV, JSON, etc.) and let the page class load its elements from that file; all the action methods remain part of the page class.
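For illustration, login.csv might look something like this (the column layout is made up; any consistent scheme works):
name,how,locator
username,id,username
password,id,password
signin,xpath,//button[@type='submit']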

Yaml properties as a Map in Spring Boot

We have a spring-boot project and are using application.yml files. This works exactly as described in the spring-boot documentation. spring-boot automatically looks in several locations for the files, and obeys any environment overrides we use for the location of those files.
Now we want to also expose those yaml properties as a Map. According to the documentation this can be done with YamlMapFactoryBean. However, YamlMapFactoryBean wants me to specify which yaml files to use via the resources property. I want it to use the same yaml files and processing hierarchy that it used when creating properties, so that I can still take advantage of "magical" features such as placeholder resolution in property values.
I didn't see any documentation on if this was possible.
I was thinking of writing a MapFactoryBean that looked at the environment and simply reversed the "flattening" performed by the YamlProcessor when creating the properties representation of the file.
Any other ideas?
The ConfigFileApplicationContextListener contains the logic for searching for files in various locations. And PropertySourcesLoader loads a file (Resource) into property sources. Neither is really designed for standalone use, but you could easily duplicate them if you want more control. The PropertySourcesLoader delegates to a collection of PropertySourceLoaders so you could add one of the latter that delegates to your YamlMapFactoryBean.
A slightly awkward but workable solution would be to use the existing machinery to collect the YAML on startup. Add a new PropertySourceLoader to your META-INF/spring.factories and let it create new property sources, then post process the Environment to extract the source map(s).
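For the spring.factories route, the registration is a one-liner; something like this, where MapCapturingYamlLoader is a hypothetical class of your own (check the PropertySourceLoader interface in your Boot version for the exact contract):
# META-INF/spring.factories
org.springframework.boot.env.PropertySourceLoader=com.example.MapCapturingYamlLoader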
Beware, though: creating a single Map from multiple YAML files, or even from a single file with multiple documents (let alone multiple files with multiple documents), isn't as easy as you might think. You have a map-merge problem, and someone has to define the algorithm. The flattening done in YamlPropertiesFactoryBean and the merge done in YamlMapFactoryBean are just two choices out of a (probably) larger set of possibilities.
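If you only need the map and can accept pointing at the file yourself, i.e. without Boot's search locations, profile documents, and placeholder resolution, a minimal sketch:
import java.util.Map;
import org.springframework.beans.factory.config.YamlMapFactoryBean;
import org.springframework.core.io.ClassPathResource;

YamlMapFactoryBean factory = new YamlMapFactoryBean();
factory.setResources(new ClassPathResource("application.yml")); // one hard-coded resource
Map<String, Object> yaml = factory.getObject(); // nested map mirroring the YAML structure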

How do I get CsvDozerBeanWriter to pull column headers from Dozer XML mapping files

I'm writing a feature to produce CSV snapshots of screen data.
I need this to be data-driven. Thus I need to avoid hard-coding each snapshot in Java, but rather load it from a data source such as an XML file or a database. The data is contained in Java beans.
I'm using SuperCSV with the Dozer extension both at 2.1.0.
This combination seems perfect since I can code the mappings from the beans to the columns in Dozer XML mapping files.
This works well for the data, but I have not found a way to specify the strings to use for the CSV's column headers other than to hard-code them in Java as is done in all of the examples and test cases I've looked at. That is not data-driven.
Is there a way for me to specify the column headers in the mapping file? Or even to extract them from the mapping file, construct a List, and pass it to the writeHeader() method?
I think it would be OK to just use the bean property names as the headers, although the ideal situation is that some additional metadata notation in the XML's <field> tag specifies the header.
I'd have posted this on SourceForge, but I'm getting a 500 error there.
I'm a Super CSV developer. You're the first person I've heard of who's using CsvDozerBeanWriter with their own DozerBeanMapper - great to hear that feature is useful :)
So what's the goal of being 'data driven'? It sounds like you want your code to be really generic, so you can alter the CSV just by changing the XML. Is that right? Of course, you can't configure the cell processors dynamically...or are you trying to do that too!!??
I'd take a look at the MappingMetadata API of Dozer, which you can access by calling getMappingMetadata() on the DozerBeanMapper. I've never used it, but it looks like you could derive the column names this way (though you'd probably be limited to the field names).
Otherwise, you'll have to parse the XML file yourself (I'd probably use XPath). You'd have to do it this way if you want to use some other metadata in the XML for the column name.
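If you go the XPath route, here is a minimal sketch using only the JDK. It assumes the usual Dozer mapping layout (<mapping><class-a/><class-b/><field><a>…</a><b>…</b></field></mapping>) and an illustrative file name; the headers end up being the source-side (bean) field names:
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class HeaderExtractor {
    public static void main(String[] args) throws Exception {
        // the default factory is not namespace-aware, so plain element names match below
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("dozer-mapping.xml");
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList nodes = (NodeList) xpath.evaluate(
                "//mapping/field/a/text()", doc, XPathConstants.NODESET);
        List<String> headers = new ArrayList<String>();
        for (int i = 0; i < nodes.getLength(); i++) {
            headers.add(nodes.item(i).getNodeValue());
        }
        // then: beanWriter.writeHeader(headers.toArray(new String[0]));
        System.out.println(headers);
    }
}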

Is there XML binding library for Ruby (like JAXB)?

Is there any tool for Ruby which can transform XML (SOAP) to objects and vice versa? And, if possible, generate all the objects (models) from an XML schema (XSD)? I have worked several times with the JAXB tool (in Java) and I need something similar:
generate models from XML schema
easily create component for serializing and deserializing them
easily create component for storing the objects to database
if possible, generate database tables according to that schema
Do you know any tool for this? What approach would you recommend to complete such task?
Thanks for your answers.
Savon should cover the SOAP part of it.
I haven't used it but there is a library called HappyMapper: http://happymapper.rubyforge.org/

NSCoder vs NSDictionary, when do you use what?

I'm trying to figure out how to decide when to use NSDictionary or NSCoder/NSCoding?
It seems that for general property lists and such, NSDictionary is the easy way to go: it generates XML files that are easily editable outside of the application.
When dealing with custom classes that hold data, possibly with other custom classes nested inside, it seems like NSCoder/NSCoding would be the better route, since it will step through all the contained objects and encode them as well when an archive command is issued.
NSDictionary seems like it would take more work, since you have to flatten all the properties or data down to a single level to save them, whereas NSCoder/NSCoding automatically encodes nested custom classes that implement the NSCoding protocol.
Outside of the archive being binary data that is not editable outside of your application, is there a real reason to use one over the other? And along those lines, is there an indicator of which way you should lean between the two? Am I missing something obvious?
Apple's documentation on object graphs has this to say:
Mac OS X serializations store a simple hierarchy of value objects, such as dictionaries, arrays, strings, and binary data. The serialization only preserves the values of the objects and their position in the hierarchy. Multiple references to the same value object might result in multiple objects when deserialized. The mutability of the objects is not maintained.
…
Mac OS X archives store an arbitrarily complex object graph. The archive preserves the identity of every object in the graph and all the relationships it has with all the other objects in the graph. When unarchived, the rebuilt object graph should, with few exceptions, be an exact copy of the original object graph.
The way I interpret this is that, if you want to store simple values, serialization (using an NSDictionary, for example) is a fine way to go. If you want to store an object graph of arbitrary types, with uniqueness and mutability preserved, using archives (with NSCoder, for example) is your best bet.
You may also want to read Apple's Archives and Serializations Programming Guide for Cocoa, of which the aforelinked page on object graphs is a part, as it covers this topic well.
I am NOT a big fan of using NSCoding/NSCoder/NSArchiver (we need to pick a name!) to serialise an object graph to a file.
Archives created in this way are incredibly fragile. If you save an object of class Foo then by golly you need to make sure when you load the data back in you have a class Foo in your application.
This makes NSCoder based serialisation difficult from the perspective of sharing files with other applications or even forwards compatibility with your future application.
I forgot to list what I would recommend.
NSCoding can be ok in certain situations: if you're just doing something quick and simple (although you do have to write a lot of code - two methods per class to be serialised). It can also be ok if you're not worried about compatibility with other applications.
Export/import via property lists (perhaps using the NSPropertyListSerialization class) is a fine solution. XML-based plists are easy to create and edit. The main advantage of plists is that you're not tying the file format to just your application.
You can also create your own XML based file format and read/write to it using NSXMLDocument API and friends. This really isn't much more work than using property lists.
I think you're a bit confused: NSDictionary is a data structure, and it also happens to implement the NSCoding protocol. So in essence, you can either put all your data into an NSDictionary and have that encode itself later on, or you can implement the NSCoding protocol and encode your object tree using the NSCoder API. The type of NSCoder object passed to the encodeWithCoder: method determines the format of the encoded output.
