How to filter a TreeGrid?

I currently have a TreeGrid which shows nodes with names. The data comes from a manually populated DataSource.
When I set the filter on the nodeName field, the filter is not applied recursively, so I can only filter on the root node.
How can I tell the filter to search for matches in child nodes?
PS: in the code below, I have 3 nodes: Root > Run > Child1. If I try the filter and type "R", I get Root and Run. But if I type "C", I get "no results found".
Code
DataSource:
package murex.otk.gwt.gui.client.ui.datasource;

import java.util.List;

import murex.otk.gwt.gui.client.ui.record.TreeRecord;

import com.smartgwt.client.data.DataSource;
import com.smartgwt.client.data.fields.DataSourceIntegerField;
import com.smartgwt.client.data.fields.DataSourceTextField;

public class ClassesDataSource extends DataSource {

    private static ClassesDataSource instance = null;

    public static ClassesDataSource getInstance() {
        if (instance == null) {
            instance = new ClassesDataSource("classesDataSource");
        }
        return instance;
    }

    private ClassesDataSource(String id) {
        setID(id);
        DataSourceTextField nodeNameField = new DataSourceTextField("nodeName");
        nodeNameField.setCanFilter(true);
        nodeNameField.setRequired(true);
        DataSourceIntegerField nodeIdField = new DataSourceIntegerField("nodeId");
        nodeIdField.setPrimaryKey(true);
        nodeIdField.setRequired(true);
        DataSourceIntegerField nodeParentField = new DataSourceIntegerField("nodeParent");
        nodeParentField.setRequired(true);
        nodeParentField.setForeignKey(id + ".nodeId");
        nodeParentField.setRootValue(0);
        setFields(nodeIdField, nodeNameField, nodeParentField);
        setClientOnly(true);
    }

    public void populateDataSource(List<String> classNames) {
        TreeRecord root = new TreeRecord("Root", 0);
        addData(root);
        TreeRecord child1 = new TreeRecord("Child1", root.getNodeId());
        addData(child1);
        TreeRecord child2 = new TreeRecord("Run", child1.getNodeId());
        addData(child2);
    }
}
Main
public void onModuleLoad() {
    ClassesDataSource.getInstance().populateDataSource(new ArrayList<String>());
    final TreeGrid employeeTree = new TreeGrid();
    employeeTree.setHeight(350);
    employeeTree.setDataSource(ClassesDataSource.getInstance());
    employeeTree.setAutoFetchData(true);
    TreeGridField field = new TreeGridField("nodeName");
    field.setCanFilter(true);
    employeeTree.setFields(field);
    employeeTree.setShowFilterEditor(true);
    employeeTree.setAutoFetchAsFilter(true);
    employeeTree.setFilterOnKeypress(true);
    employeeTree.draw();
}

I solved this.
The problem was that the filter was calling the server to fetch data, whereas my DataSource was set to client-only. To fix this, the employeeTree must have employeeTree.setLoadDataOnDemand(false);

You may also use employeeTree.setKeepParentsOnFilter(true)
http://www.smartclient.com/smartgwtee/javadoc/com/smartgwt/client/widgets/tree/TreeGrid.html#setKeepParentsOnFilter(java.lang.Boolean)
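Putting both suggestions together, the onModuleLoad() from the question might end up looking roughly like this (a sketch only; the two added calls are the fix described above, everything else is unchanged from the question):

public void onModuleLoad() {
    ClassesDataSource.getInstance().populateDataSource(new ArrayList<String>());
    final TreeGrid employeeTree = new TreeGrid();
    employeeTree.setHeight(350);
    employeeTree.setDataSource(ClassesDataSource.getInstance());
    employeeTree.setAutoFetchData(true);
    // Load the whole client-only tree up front so the filter can see child nodes.
    employeeTree.setLoadDataOnDemand(false);
    // Keep ancestors of a matching node visible, e.g. typing "C" still shows Root > Child1.
    employeeTree.setKeepParentsOnFilter(true);
    TreeGridField field = new TreeGridField("nodeName");
    field.setCanFilter(true);
    employeeTree.setFields(field);
    employeeTree.setShowFilterEditor(true);
    employeeTree.setAutoFetchAsFilter(true);
    employeeTree.setFilterOnKeypress(true);
    employeeTree.draw();
}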

Related

Entity Framework Core CompileAsyncQuery syntax to do a query returning a list?

Documentation and examples online about compiled async queries are kinda sparse, so I might as well ask for guidance here.
Let's say I have a repository pattern method like this to query all entries in a table:
public async Task<List<ProgramSchedule>> GetAllProgramsScheduledList()
{
    using (var context = new MyDataContext(_dbOptions))
    {
        context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
        return await context.ProgramsScheduledLists.ToListAsync();
    }
}
This works fine.
Now I want to do the same, but with an async compiled query.
One way I managed to get it to compile is with this syntax:
static readonly Func<MyDataContext, Task<List<ProgramSchedule>>> GetAllProgramsScheduledListQuery;

static ProgramsScheduledListRepository()
{
    GetAllProgramsScheduledListQuery =
        EF.CompileAsyncQuery<MyDataContext, List<ProgramSchedule>>(t => t.ProgramsScheduledLists.ToList());
}

public async Task<List<ProgramSchedule>> GetAllProgramsScheduledList()
{
    using (var context = new MyDataContext(_dbOptions))
    {
        context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
        return await GetAllProgramsScheduledListQuery(context);
    }
}
But then at runtime this exception gets thrown:
System.ArgumentException: Expression of type 'System.Collections.Generic.List`1[Model.Scheduling.ProgramSchedule]' cannot be used for return type 'System.Threading.Tasks.Task`1[System.Collections.Generic.List`1[Model.Scheduling.ProgramSchedule]]'
The weird part is that if I use any other operator (for example SingleOrDefault), it works fine. It only has a problem returning a List.
Why?
EF.CompileAsyncQuery for a set of records returns IAsyncEnumerable<T>. To get a List from such a query you have to enumerate the IAsyncEnumerable and fill the List:
private static Func<MyDataContext, IAsyncEnumerable<ProgramSchedule>> compiledQuery =
    EF.CompileAsyncQuery((MyDataContext ctx) => ctx.ProgramsScheduledLists);

public static async Task<List<ProgramSchedule>> GetAllProgramsScheduledList(CancellationToken ct = default)
{
    using (var context = new MyDataContext(_dbOptions))
    {
        context.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
        var result = new List<ProgramSchedule>();
        await foreach (var s in compiledQuery(context).WithCancellation(ct))
        {
            result.Add(s);
        }
        return result;
    }
}

How to test a state-stored aggregate that doesn't produce events

I want to test a state-stored aggregate by using AggregateTestFixture. However, I get an AggregateNotFoundException: "No 'given' events were configured for this aggregate, nor have any events been stored."
I change the state of the aggregate in command handlers and apply no events, since I don't want my domain entry table to grow unnecessarily.
Here is my external command handler for the aggregate:
open class AllocationCommandHandler constructor(
    private val repository: Repository<Allocation>,
) {

    @CommandHandler
    fun on(cmd: CreateAllocation) {
        this.repository.newInstance {
            Allocation(
                cmd.allocationId
            )
        }
    }

    @CommandHandler
    fun on(cmd: CompleteAllocation) {
        this.load(cmd.allocationId).invoke { it.complete() }
    }

    private fun load(allocationId: AllocationId): Aggregate<Allocation> =
        repository.load(allocationId)
}
Here is the aggregate:
@Entity
@Aggregate
@Revision("1.0")
final class Allocation constructor() {

    @AggregateIdentifier
    @Id
    lateinit var allocationId: AllocationId
        private set

    var status: AllocationStatusEnum = AllocationStatusEnum.IN_PROGRESS
        private set

    constructor(
        allocationId: AllocationId,
    ) : this() {
        this.allocationId = allocationId
        this.status = AllocationStatusEnum.IN_PROGRESS
    }

    fun complete() {
        if (this.status != AllocationStatusEnum.IN_PROGRESS) {
            throw IllegalArgumentException("cannot complete if not in progress")
        }
        this.status = AllocationStatusEnum.COMPLETED
        apply(
            AllocationCompleted(
                this.allocationId
            )
        )
    }
}
There is no event handler for the AllocationCompleted event in this aggregate, since it is listened to by another aggregate.
So here is the test code:
class AllocationTest {

    private lateinit var fixture: AggregateTestFixture<Allocation>

    @Before
    fun setUp() {
        fixture = AggregateTestFixture(Allocation::class.java).apply {
            registerAnnotatedCommandHandler(AllocationCommandHandler(repository))
        }
    }

    @Test
    fun `create allocation`() {
        fixture.givenNoPriorActivity()
            .`when`(CreateAllocation("1"))
            .expectSuccessfulHandlerExecution()
            .expectState {
                assertTrue(it.allocationId == "1")
            }
    }

    @Test
    fun `complete allocation`() {
        fixture.givenState { Allocation("1") }
            .`when`(CompleteAllocation("1"))
            .expectSuccessfulHandlerExecution()
            .expectState {
                assertTrue(it.status == AllocationStatusEnum.COMPLETED)
            }
    }
}
The create allocation test passes; I get the error on the complete allocation test.
The givenNoPriorActivity is actually not intended to be used with state-stored aggregates. Quite recently an adjustment has been made to the AggregateTestFixture to support this, but that will be released with Axon 4.6.0 (the current most recent version is 4.5.1).
That, however, does not change the fact that I find it odd the complete allocation test fails. Using the givenState and expectState methods is the way to go. Maybe the Kotlin/Java combination is acting up right now; have you tried doing the same with pure Java, just for certainty?
On any note, the exception you share comes from the RecordingEventStore inside the AggregateTestFixture. It should only occur if an event-sourcing repository is used under the hood by the fixture, since that is what reads events. What might be the culprit is the usage of givenNoPriorActivity. Please try to replace that with a givenState() providing an empty Aggregate instance.
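For reference, a plain-Java version of the same tests, using givenState() with an empty Allocation instead of givenNoPriorActivity(), might look roughly like this. This is only a sketch: it assumes AllocationId is effectively a String (as the Kotlin assertions imply) and reuses the command and aggregate classes from the question.

import static org.junit.Assert.assertTrue;

import org.axonframework.test.aggregate.AggregateTestFixture;
import org.junit.Before;
import org.junit.Test;

public class AllocationJavaTest {

    private AggregateTestFixture<Allocation> fixture;

    @Before
    public void setUp() {
        fixture = new AggregateTestFixture<>(Allocation.class);
        // Register the external command handler against the fixture's repository.
        fixture.registerAnnotatedCommandHandler(new AllocationCommandHandler(fixture.getRepository()));
    }

    @Test
    public void createAllocation() {
        // givenState() with an empty aggregate instead of givenNoPriorActivity().
        fixture.givenState(Allocation::new)
               .when(new CreateAllocation("1"))
               .expectSuccessfulHandlerExecution()
               .expectState(allocation -> assertTrue(allocation.getAllocationId().equals("1")));
    }

    @Test
    public void completeAllocation() {
        fixture.givenState(() -> new Allocation("1"))
               .when(new CompleteAllocation("1"))
               .expectSuccessfulHandlerExecution()
               .expectState(allocation -> assertTrue(allocation.getStatus() == AllocationStatusEnum.COMPLETED));
    }
}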

Spring Cloud Stream file source app - history of processed files and polling files under subdirectories

I'm building a data pipeline with the Spring Cloud Stream File Source app at the start of the pipeline. I need some help with working around some missing features.
My File Source app (based on org.springframework.cloud.stream.app:spring-cloud-starter-stream-source-file) works perfectly well except for missing features that I need help with. I need:
To delete files after they are polled and sent as messages
To poll files in subdirectories
With respect to item 1, I read that the delete feature doesn't exist in the file source app (it is available in the sftp source). Every time the app is restarted, the files that were processed in the past are picked up again. Can the history of processed files be made permanent? Is there an easy alternative?
To support those requirements you definitely need to modify the code of the mentioned File Source project: https://docs.spring.io/spring-cloud-stream-app-starters/docs/Einstein.BUILD-SNAPSHOT/reference/htmlsingle/#_patching_pre_built_applications
I would suggest forking the project and pulling it from GitHub as is, since you are going to modify the project's existing code. Then follow the instructions in the mentioned doc on how to build the target binder-specific artifact, which will be compatible with the SCDF environment.
Now about the questions:
To poll sub-directories for the same file pattern, you need to configure a RecursiveDirectoryScanner on the Files.inboundAdapter():
/**
 * Specify a custom scanner.
 * @param scanner the scanner.
 * @return the spec.
 * @see FileReadingMessageSource#setScanner(DirectoryScanner)
 */
public FileInboundChannelAdapterSpec scanner(DirectoryScanner scanner) {
Note that all the filters must be configured on this DirectoryScanner instead.
There is going to be a warning otherwise:
// Check that the filter and locker options are _NOT_ set if an external scanner has been set.
// The external scanner is responsible for the filter and locker options in that case.
Assert.state(!(this.scannerExplicitlySet && (this.filter != null || this.locker != null)),
() -> "When using an external scanner the 'filter' and 'locker' options should not be used. " +
"Instead, set these options on the external DirectoryScanner: " + this.scanner);
To keep track of the files, it is better to consider a FileSystemPersistentAcceptOnceFileListFilter backed by an external persistence store through the ConcurrentMetadataStore implementation: https://docs.spring.io/spring-integration/reference/html/#metadata-store. This must be used instead of that preventDuplicates(), because FileSystemPersistentAcceptOnceFileListFilter ensures only-once logic for us as well. A small sketch combining this with the recursive scanner is shown below.
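A minimal sketch of that combination, under the assumption that a properties-file-backed store is good enough for your case; any other ConcurrentMetadataStore implementation (Redis, JDBC, Zookeeper, etc.) can be dropped in the same way:

import java.io.File;

import org.springframework.integration.file.RecursiveDirectoryScanner;
import org.springframework.integration.file.dsl.FileInboundChannelAdapterSpec;
import org.springframework.integration.file.dsl.Files;
import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
import org.springframework.integration.metadata.PropertiesPersistingMetadataStore;

public class RecursivePersistentFileSourceSketch {

    public FileInboundChannelAdapterSpec fileMessageSource(File rootDir) {
        // Persist "already seen" file names across restarts (illustrative store choice).
        PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore();

        // Filters must go on the scanner, not on the adapter spec, when a custom scanner is set.
        RecursiveDirectoryScanner scanner = new RecursiveDirectoryScanner();
        scanner.setFilter(new FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "file-source-"));

        return Files.inboundAdapter(rootDir)
                .scanner(scanner);
    }
}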
Deleting the file right after sending might not be an option, since you may just send the File as is and it has to still be available on the other side.
Also, you can add a ChannelInterceptor into the source.output() and implement its postSend() to perform ((File) message.getPayload()).delete(), which is going to happen when the message has been successfully sent to the binder destination.
@EnableBinding(Source.class)
@Import(TriggerConfiguration.class)
@EnableConfigurationProperties({FileSourceProperties.class, FileConsumerProperties.class,
        TriggerPropertiesMaxMessagesDefaultUnlimited.class})
public class FileSourceConfiguration {

    @Autowired
    @Qualifier("defaultPoller")
    PollerMetadata defaultPoller;

    @Autowired
    Source source;

    @Autowired
    private FileSourceProperties properties;

    @Autowired
    private FileConsumerProperties fileConsumerProperties;

    private Boolean alwaysAcceptDirectories = false;
    private Boolean deletePostSend;
    private Boolean movePostSend;
    private String movePostSendSuffix;

    @Bean
    public IntegrationFlow fileSourceFlow() {
        FileInboundChannelAdapterSpec messageSourceSpec = Files.inboundAdapter(new File(this.properties.getDirectory()));
        RecursiveDirectoryScanner recursiveDirectoryScanner = new RecursiveDirectoryScanner();
        messageSourceSpec.scanner(recursiveDirectoryScanner);
        FileVisitOption[] fileVisitOption = new FileVisitOption[1];
        recursiveDirectoryScanner.setFilter(initializeFileListFilter());
        initializePostSendAction();
        IntegrationFlowBuilder flowBuilder = IntegrationFlows
                .from(messageSourceSpec,
                        new Consumer<SourcePollingChannelAdapterSpec>() {
                            @Override
                            public void accept(SourcePollingChannelAdapterSpec sourcePollingChannelAdapterSpec) {
                                sourcePollingChannelAdapterSpec
                                        .poller(defaultPoller);
                            }
                        });
        ChannelInterceptor channelInterceptor = new ChannelInterceptor() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                if (sent) {
                    File fileOriginalFile = (File) message.getHeaders().get("file_originalFile");
                    if (fileOriginalFile != null) {
                        if (movePostSend) {
                            fileOriginalFile.renameTo(new File(fileOriginalFile + movePostSendSuffix));
                        } else if (deletePostSend) {
                            fileOriginalFile.delete();
                        }
                    }
                }
            }
            // Override more interceptor methods to capture some logs here
        };
        MessageChannel messageChannel = source.output();
        ((DirectChannel) messageChannel).addInterceptor(channelInterceptor);
        return FileUtils.enhanceFlowForReadingMode(flowBuilder, this.fileConsumerProperties)
                .channel(messageChannel)
                .get();
    }

    private void initializePostSendAction() {
        deletePostSend = this.properties.isDeletePostSend();
        movePostSend = this.properties.isMovePostSend();
        movePostSendSuffix = this.properties.getMovePostSendSuffix();
        if (deletePostSend && movePostSend) {
            String errorMessage = "The 'delete-file-post-send' and 'move-file-post-send' attributes are mutually exclusive";
            throw new IllegalArgumentException(errorMessage);
        }
        if (movePostSend && (movePostSendSuffix == null || movePostSendSuffix.trim().length() == 0)) {
            String errorMessage = "The 'move-post-send-suffix' is required when 'move-file-post-send' is set to true.";
            throw new IllegalArgumentException(errorMessage);
        }
        // Add additional validation to ensure the user didn't configure a file move that will result in cyclic processing of file
    }

    private FileListFilter<File> initializeFileListFilter() {
        final List<FileListFilter<File>> filtersNeeded = new ArrayList<FileListFilter<File>>();
        if (this.properties.getFilenamePattern() != null && this.properties.getFilenameRegex() != null) {
            String errorMessage = "The 'filename-pattern' and 'filename-regex' attributes are mutually exclusive.";
            throw new IllegalArgumentException(errorMessage);
        }
        if (StringUtils.hasText(this.properties.getFilenamePattern())) {
            SimplePatternFileListFilter patternFilter = new SimplePatternFileListFilter(this.properties.getFilenamePattern());
            if (this.alwaysAcceptDirectories != null) {
                patternFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(patternFilter);
        } else if (this.properties.getFilenameRegex() != null) {
            RegexPatternFileListFilter regexFilter = new RegexPatternFileListFilter(this.properties.getFilenameRegex());
            if (this.alwaysAcceptDirectories != null) {
                regexFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(regexFilter);
        }
        FileListFilter<File> createdFilter = null;
        if (!Boolean.FALSE.equals(this.properties.isIgnoreHiddenFiles())) {
            filtersNeeded.add(new IgnoreHiddenFileListFilter());
        }
        if (Boolean.TRUE.equals(this.properties.isPreventDuplicates())) {
            filtersNeeded.add(new AcceptOnceFileListFilter<File>());
        }
        if (filtersNeeded.size() == 1) {
            createdFilter = filtersNeeded.get(0);
        } else {
            createdFilter = new CompositeFileListFilter<File>(filtersNeeded);
        }
        return createdFilter;
    }
}

Hapi Fhir DomainResource, What URL do you use?

http://hapifhir.io/doc_custom_structures.html
This article discusses a DomainResource:
There are situations however when you might want to create an entirely
custom resource type. This feature should be used only if there is no
other option, since it means you are creating a resource type that
will not be interoperable with other FHIR implementations.
I've implemented the code verbatim. (I show the classes below with no "guts", just for brevity; full code at the URL.)
/**
 * This is an example of a custom resource that also uses a custom
 * datatype.
 *
 * Note that we are extending DomainResource for an STU3
 * resource. For DSTU2 it would be BaseResource.
 */
@ResourceDef(name = "CustomResource", profile = "http://hl7.org/fhir/profiles/custom-resource")
public class CustomResource extends DomainResource {
}
and
/**
 * This is an example of a custom datatype.
 *
 * This is an STU3 example so it extends Type and implements ICompositeType. For
 * DSTU2 it would extend BaseIdentifiableElement and implement ICompositeDatatype.
 */
@DatatypeDef(name = "CustomDatatype")
public class CustomDatatype extends Type implements ICompositeType {
}
And I've "registered it" in my code base:
if (null != this.fhirContext)
{
    this.fhirContext.registerCustomType(CustomResource.class);
    this.fhirContext.registerCustomType(CustomDatatype.class);
}
(~trying to follow the instructions from the URL above)
// Create a context. Note that we declare the custom types we'll be using
// on the context before actually using them
FhirContext ctx = FhirContext.forDstu3();
ctx.registerCustomType(CustomResource.class);
ctx.registerCustomType(CustomDatatype.class);
// Now let's create an instance of our custom resource type
// and populate it with some data
CustomResource res = new CustomResource();
// Add some values, including our custom datatype
DateType value0 = new DateType("2015-01-01");
res.getTelevision().add(value0);
CustomDatatype value1 = new CustomDatatype();
value1.setDate(new DateTimeType(new Date()));
value1.setKittens(new StringType("FOO"));
res.getTelevision().add(value1);
res.setDogs(new StringType("Some Dogs"));
// Now let's serialize our instance
String output = ctx.newXmlParser().setPrettyPrint(true).encodeResourceToString(res);
System.out.println(output);
But that looks like console-app usage of two objects, not how to wire it into the FHIR server.
I've been trying for 3 hours now to figure out what URL to use.
Some things I've tried:
http://127.0.0.1:8080/fhir/CustomResource
http://127.0.0.1:8080/fhir/profiles/custom-resource
http://127.0.0.1:8080/fhir/custom-resource
to no avail...
What is the URL?
And how do I populate the values for it?
OK, so the CustomResource still needs its own IResourceProvider (thanks to daniels in the comments of the original question).
Here is a basic working example.
You'll do everything I listed in the original question AND you'll make and register an IResourceProvider for the new CustomResource.
new IResourceProvider
package mystuff.resourceproviders;

import org.hl7.fhir.dstu3.model.DateType;
import org.hl7.fhir.dstu3.model.IdType;
import ca.uhn.fhir.rest.annotation.IdParam;
import ca.uhn.fhir.rest.annotation.Read;
import ca.uhn.fhir.rest.server.IResourceProvider;
import mystuff.CustomDatatype;
import mystuff.CustomResource;
import org.hl7.fhir.dstu3.model.DateTimeType;
import org.hl7.fhir.dstu3.model.StringType;
import org.hl7.fhir.instance.model.api.IBaseResource;

import java.util.Date;

public class CustomResourceProvider implements IResourceProvider {

    @Override
    public Class<? extends IBaseResource> getResourceType() {
        return CustomResource.class;
    }

    /* the IdType (datatype) will be different based on STU2 or STU3. STU3 version below */
    @Read()
    public CustomResource getResourceById(@IdParam IdType theId) {
        // Now let's create an instance of our custom resource type
        // and populate it with some data
        CustomResource res = new CustomResource();
        res.setId(theId);

        // Add some values, including our custom datatype
        DateType value0 = new DateType("2015-01-01");
        res.getTelevision().add(value0);

        CustomDatatype value1 = new CustomDatatype();
        value1.setDate(new DateTimeType(new Date()));
        value1.setKittens(new StringType("FOO"));
        res.getTelevision().add(value1);

        res.setDogs(new StringType("Some Dogs"));
        return res;
    }
}
then you'll register this (as documented here):
http://hapifhir.io/doc_rest_server.html#_toc_create_a_server
instead of this:
@Override
protected void initialize() throws ServletException {
    /*
     * The servlet defines any number of resource providers, and
     * configures itself to use them by calling
     * setResourceProviders()
     */
    List<IResourceProvider> resourceProviders = new ArrayList<IResourceProvider>();
    resourceProviders.add(new RestfulPatientResourceProvider());
    resourceProviders.add(new RestfulObservationResourceProvider());
    setResourceProviders(resourceProviders);
}
you'll have something like this
@Override
protected void initialize() throws ServletException {
    /*
     * The servlet defines any number of resource providers, and
     * configures itself to use them by calling
     * setResourceProviders()
     */
    List<IResourceProvider> resourceProviders = new ArrayList<IResourceProvider>();
    resourceProviders.add(new CustomResourceProvider());
    setResourceProviders(resourceProviders);
}
URL for testing this (the most probable local development URL, that is):
http://127.0.0.1:8080/fhir/CustomResource/12345
and you'll get back this JSON response.
{
    "resourceType": "CustomResource",
    "id": "12345",
    "meta": {
        "profile": [
            "http://hl7.org/fhir/profiles/custom-resource"
        ]
    },
    "televisionDate": [
        "2015-01-01"
    ],
    "televisionCustomDatatype": [
        {
            "date": "2019-01-14T11:49:44-05:00",
            "kittens": "FOO"
        }
    ],
    "dogs": "Some Dogs"
}

How to set the default namespace, or how to define the @Key values, when using google-api with Atom serialization/parsing

I'm having trouble with Atom parsing/serializing - clearly something related to the namespace and the default alias - but I can't figure out what I'm doing wrong.
I have two methods - one that does a GET to see if an album is defined, and one that does a POST to create the album (if it does not exist).
The GET I managed to get working - although there too I'm pretty sure I am doing something wrong because it is different from the PicasaAndroidSample. Specifically, if I define:
public class EDAlbum {

    @Key("atom:title")
    public String title;

    @Key("atom:summary")
    public String summary;

    @Key("atom:gphoto:access")
    public String access;

    @Key("atom:category")
    public EDCategory category = EDCategory.newKind("album");
}
Then the following code does indeed get all the albums:
PicasaUrl url = PicasaUrl.relativeToRoot("feed/api/user/default");
HttpRequest request = EDApplication.getRequest(url);
HttpResponse res = request.execute();
EDAlbumFeed feed = res.parseAs(EDAlbumFeed.class);
boolean hasEDAlbum = false;
for (EDAlbum album : feed.items) {
    if (album.title.equals(EDApplication.ED_ALBUM_NAME)) {
        hasEDAlbum = true;
        break;
    }
}
But - if instead I have:
public class EDAlbum {

    @Key("title")
    public String title;

    @Key("summary")
    public String summary;

    @Key("gphoto:access")
    public String access;

    @Key("category")
    public EDCategory category = EDCategory.newKind("album");
}
Then the feed has an empty collection - i.e. the parser does not know that this is Atom (my guess).
I can live with the atom:title in my classes - I don't get it, but it works.
The problem is that I can't get the POST to work (to create the album). This code is:
EDAlbum a = new EDAlbum();
a.access = "public";
a.title = EDApplication.ED_ALBUM_NAME;
a.summary = c.getString(R.string.ed_album_summary);
AtomContent content = new AtomContent();
content.entry = a;
content.namespaceDictionary = EDApplication.getNamespaceDictionary();
PicasaUrl url = PicasaUrl.relativeToRoot("feed/api/user/default");
HttpRequest request = EDApplication.postRequest(url, content);
HttpResponse res = request.execute();
The transport and namespace are:
private static final HttpTransport transport = new ApacheHttpTransport(); // my libraries don't include GoogleTransport.
private static HttpRequestFactory createRequestFactory(final HttpTransport transport) {
    return transport.createRequestFactory(new HttpRequestInitializer() {
        public void initialize(HttpRequest request) {
            AtomParser parser = new AtomParser();
            parser.namespaceDictionary = getNamespaceDictionary();
            request.addParser(parser);
        }
    });
}
public static XmlNamespaceDictionary getNamespaceDictionary() {
    if (nsDictionary == null) {
        nsDictionary = new XmlNamespaceDictionary();
        nsDictionary.set("", "http://www.w3.org/2005/Atom");
        nsDictionary.set("atom", "http://www.w3.org/2005/Atom");
        nsDictionary.set("exif", "http://schemas.google.com/photos/exif/2007");
        nsDictionary.set("gd", "http://schemas.google.com/g/2005");
        nsDictionary.set("geo", "http://www.w3.org/2003/01/geo/wgs84_pos#");
        nsDictionary.set("georss", "http://www.georss.org/georss");
        nsDictionary.set("gml", "http://www.opengis.net/gml");
        nsDictionary.set("gphoto", "http://schemas.google.com/photos/2007");
        nsDictionary.set("media", "http://search.yahoo.com/mrss/");
        nsDictionary.set("openSearch", "http://a9.com/-/spec/opensearch/1.1/");
        nsDictionary.set("xml", "http://www.w3.org/XML/1998/namespace");
    }
    return nsDictionary;
}
If I use
#Key("title")
public String title;
then I get an exception that it does not have a default namespace:
W/System.err( 1957): java.lang.IllegalArgumentException: unrecognized alias: (default)
W/System.err( 1957): at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
W/System.err( 1957): at com.google.api.client.xml.XmlNamespaceDictionary.getNamespaceUriForAliasHandlingUnknown(XmlNamespaceDictionary.java:288)
W/System.err( 1957): at com.google.api.client.xml.XmlNamespaceDictionary.startDoc(XmlNamespaceDictionary.java:224)
and if I use
#Key("atom:title")
public String title;
then it does serialize, but each element has the atom: prefix and the call fails - when I do a tcpdump on it I see something like
<?xml version='1.0' encoding='UTF-8' ?>
<atom:entry xmlns:atom="http://www.w3.org/2005/Atom">
  <atom:category scheme="http://schemas.google.com/g/2005#kind"
      term="http://schemas.google.com/photos/2007#album" />
  <atom:gphoto:access>public</atom:gphoto:acc...
What do I need to do differently in order to use
@Key("title")
public String title;
and have both the GET and the POST manage the namespace?
It looks like you are adding either duplicate dictionary keys or keys that are not understood by the serializer.
Use the following instead:
static final XmlNamespaceDictionary DICTIONARY = new XmlNamespaceDictionary()
    .set("", "http://www.w3.org/2005/Atom")
    .set("activity", "http://activitystrea.ms/spec/1.0/")
    .set("georss", "http://www.georss.org/georss")
    .set("media", "http://search.yahoo.com/mrss/")
    .set("thr", "http://purl.org/syndication/thread/1.0");
Removing the explicit set for the "atom" item namespace solved this issue for me.
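If that fix works for you, the fields from the question should be usable without the atom: prefix, since the blank alias now covers the Atom namespace. A sketch of how the album class might then look (untested; assumes the gphoto alias is still registered in the dictionary you actually use, and EDCategory is the helper class from the question):

import com.google.api.client.util.Key;

public class EDAlbum {

    // With "" mapped to http://www.w3.org/2005/Atom, plain names land in the default namespace.
    @Key("title")
    public String title;

    @Key("summary")
    public String summary;

    // Non-Atom namespaces keep their alias prefix.
    @Key("gphoto:access")
    public String access;

    @Key("category")
    public EDCategory category = EDCategory.newKind("album");
}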
