FHIR client against HAPI server never returns more than 2000 records - hl7-fhir

I'm using the .NET FHIR client against a Smile CDR (HAPI) FHIR server. I have 3235 patients that I'm trying to retrieve with the code below, but I never get back more than exactly 2000. I have tried adding headers both with and without the no-cache option. I know the server has more records, because issuing Patient?_summary=count gives me the total number of records I'm expecting (3235).
I've disabled the server cache and refreshed the indexes, but I always get exactly 2000 records. I've also tried different methods of retrieving patients, using Get() versus Search(), but both come up with the same result. Can anyone suggest another way to get the correct number of patients back, or hints as to what I may be doing wrong?
var patients = new List<Patient>();
var bundle = (Bundle)client.Get("Patient");
while (bundle != null)
{
    patients.AddRange(bundle.Entry.Select(e => e.Resource as Patient));
    bundle = client.Continue(bundle);
}
I've tried several variations of the cache-control header, but the count remains the same.
client.OnBeforeRequest += (object sender, BeforeRequestEventArgs e) =>
{
    e.RawRequest.Headers.Clear();
    e.RawRequest.Headers.Add("Accept", "application/fhir+json;fhirVersion=4.0");
    e.RawRequest.Headers.Add("Cache-Control", "no-cache");
};

The server has the right to impose an upper limit on the amount of data it will return in a single call, regardless of how many records the client requests. The only way to retrieve more data is to page through it using the 'next' link, as defined in the FHIR spec.
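For reference, here is a minimal paging sketch using the same synchronous calls on the official .NET client (Hl7.Fhir.Rest.FhirClient); the endpoint URL and the page size of 100 are placeholders, and the server may still cap both the page size and the total it is willing to serve.

using System.Collections.Generic;
using System.Linq;
using Hl7.Fhir.Model;
using Hl7.Fhir.Rest;

var client = new FhirClient("https://example.org/fhir");   // placeholder endpoint

var patients = new List<Patient>();

// Request an explicit page size (sent as _count); the server may reduce it.
var bundle = client.Search<Patient>(pageSize: 100);
var expectedTotal = bundle?.Total;   // the server's reported match count

while (bundle != null)
{
    patients.AddRange(bundle.Entry
        .Select(e => e.Resource)
        .OfType<Patient>());

    // Continue() follows the Bundle's 'next' link and returns null
    // when the server no longer supplies one (the last page).
    bundle = client.Continue(bundle);
}

// patients.Count can now be compared against expectedTotal.

If a loop like this still stops at exactly 2000, that points at a server-side limit rather than anything in the client code, so it is worth raising with whoever administers the CDR's search configuration.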

How to structure Shopify data into a Firestore collection that can be queried efficiently

The Background
In an attempt to build some back-end services for my e-commerce (Shopify-based) site, I have set up a cloud function that writes order details to Firestore for every new order created; it is invoked by the webhook POST that Shopify provides (the orders/create webhook).
My current cloud function -
exports.saveOrderDetails = functions.https.onRequest((req, res) => {
    // Use the Shopify order name (e.g. "#9999") as the document id.
    const docRef = db.collection('orders').doc(req.body.name);
    docRef.set(req.body).then(() => {
        res.status(200).send();
    });
});
This captures the data from the webhook and stores it in a document named after the order's "name" value within my "orders" collection. (The original post included a screenshot of how it looks in Firestore.)
My question is: with the help of body-parser (which already parses out "name", represented as #9999 in my screenshot, to set my document name value), how could I improve my cloud function to store this webhook POST in a better data structure for Firestore and to query it later?
After reviewing the comments on this question, I moved it over to Firebase-Talk, and it appears the feature I am attempting here is close to what is known as "collection group queries". I was informed that I should adjust my data-model approach, since this feature is currently still on the roadmap, and perhaps look into the Firestore REST API as suggested by @jason-berryman.
Besides the REST API, @frank-van-puffelen made a great suggestion to look into working with arrays, lists, and sets in Firebase/Firestore.
Another approach that could mitigate this in my scenario is to have my HTTP cloud function parse out multiple fields and create more top-level documents; however, this could become a point of scaling failure or increase costs, since it puts more parsing logic in my cloud function and adds latency...
I will mark my question as answered for the time being, to hopefully help others understand how to work with documents in a single collection in Firestore, and not attempt to query groups of collections before they get too far into modelling and need to restructure their app.

Get all members from the mailing list using MailChimp API 3.0

http://kb.mailchimp.com/api/resources/lists/members/lists-members-collection
Using this resource we can obtain only the first 10 members. How do we get all of them?
The answer is quite simple - use the offset and count parameters in the URL query:
https://us10.api.mailchimp.com/3.0/lists/b5b5fdc2fa/members?offset=150&count=10
Finally I found a PHP API client for MailChimp API v3:
https://github.com/pacely/mailchimp-api-v3
And the official docs about pagination... I missed them before :(
http://kb.mailchimp.com/api/article/api-3-overview
I stumbled on this one while researching a way to get all list members in MC API 3.0 as well. I noticed that there were some comments about the API timing out when trying to get all list members on one page. I also encountered this at first, but was able to overcome it by limiting the fields in the result using the 'fields' param. My code is for a mass deleter, so all I really needed was the ID of each member to put together a batch delete request. Here's how my fetch request looks (pseudo-code):
$total_members = $result['total_items'];//get number of members in list via previous request
https://usXX.api.mailchimp.com/3.0/lists/foobarx/members?fields=members.id&count=total_members
This way I'm able to fetch over 15,000 subscribers on one page without error.
offset and count are the official way per the docs, but the problem is that offset has a linear slowdown; paging through a whole list ends up looking like an n^2 operation, so if you have 20,000 items, you're in trouble. Their docs http://developer.mailchimp.com/documentation/mailchimp/reference/lists/members/#read-get_lists_list_id_members warn you against using offset.
If your scenario permits you to use other filters (like since_last_changed), then you can do it quickly. See What is the right syntax for "timeframe" in MailChimp API 3.0 for the datetime format.
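To make that concrete, here is a rough C# sketch of hitting the members endpoint with since_last_changed via HttpClient; the data-center prefix (usXX), list id, API key, timestamp, and field list are all placeholders, and the exact datetime format should be checked against the question linked above.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class SinceLastChangedExample
{
    static async Task Main()
    {
        const string apiKey = "YOUR_API_KEY";   // placeholder
        const string listId = "YOUR_LIST_ID";   // placeholder

        using var http = new HttpClient();

        // MailChimp v3 accepts HTTP Basic auth with any username and the API key as the password.
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("anystring:" + apiKey));
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // Only members changed since the given timestamp, trimmed to the fields we need.
        var url = "https://usXX.api.mailchimp.com/3.0/lists/" + listId + "/members" +
                  "?since_last_changed=2015-09-01T00:00:00%2B00:00" +   // placeholder timestamp
                  "&fields=members.id,members.email_address,total_items" +
                  "&count=1000";

        var json = await http.GetStringAsync(url);
        Console.WriteLine(json);   // parse with your JSON library of choice
    }
}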
Using the offset and count parameters is correct, as mentioned in some of the other answers, but it becomes tedious for large lists.
A more efficient way is to use a client for the MailChimp API. I used mailchimp3 for Python. With it, it's pretty easy to get all the members on your list because it handles the pagination. Here's how you would do it.
from mailchimp3 import MailChimp
client = MailChimp('YOUR_USERNAME', 'YOUR_SECRET_KEY')
client.lists.members.all('YOUR_LIST_ID', get_all=True, fields="members.email_address")
You can do it with just count: make an API call to the list root to get the total number of members, then include that number as the count parameter in the next API call and you have all your list members.
I ran into issues with this because I had a moderate list of 2600 members and MailChimp was throwing an error, but it worked with 1500 people.
So for a list bigger than 1500 members I use the MailChimp export API; bear in mind that it is going to be discontinued, but I could not find any other acceptable solution.
Alternatively, for bigger lists (>1500) you could get the total number of members and then make multiple API calls to the Members endpoint, but I really dislike that :(
If anyone has a better alternative I would be really glad to hear it.
With MailChimp.Net.
Use the offset value.
List<Member> listMembers = new List<Member>();
IMailChimpManager manager = new MailChimpManager(MailChimpApiKey);
bool moreAvailable = true;
int offset = 0;
while (moreAvailable)
{
    // Fetch one page of up to 250 subscribed members, starting at the current offset.
    var page = manager.Members.GetAllAsync(yourListId, new MemberRequest
    {
        Status = Status.Subscribed,
        Limit = 250,
        Offset = offset
    }).ConfigureAwait(false);

    var allMembers = page.GetAwaiter().GetResult();
    foreach (Member member in allMembers)
    {
        listMembers.Add(member);
    }

    // If the page is full (250 results), there may be more; a shorter page means we're done.
    if (allMembers.Count() == 250)
        offset += 250;
    else
        moreAvailable = false;
}

Handling Exchange Web Services (EWS) missing properties

I'm relatively comfortable with EWS programming and Exchange schemas, but I'm running into an interesting problem.
I have a PropertySet asking for:
ItemClass
DateTimeReceived
LastModifiedTime
Size
for every Item in the AllItems folder at the root.
I get the result set and then attempt LINQ queries against it, particularly on DateTimeReceived. Not all Items have a DateTimeReceived returned by the server, and those that don't throw an exception. I'm trying a...
long msgCount = (from msg in allItems
                 where !msg.DateTimeReceived.Equals(null)
                 select msg).Count();
... which (IMO) should return the count of allItems that have a DateTimeReceived. However, the property isn't null; it's simply not there, and accessing it throws an exception.
I'm trying to avoid iterating through the set one by one, trying each record. Anyone have a thought or experience?
Thanks TTY for the input, which definitely led to the following code that returns what I need. (Still in final testing.)
List<EWS.Item> noReceivedProperty = inputlist.Where(m => (m.GetType().GetProperty("DateTimeReceived") != null)).ToList<EWS.Item>();
Then of course, take noReceivedProperty.Count or such as needed.
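For anyone hitting the same wall, another possible angle (a sketch only, not tested against this result set) is the EWS Managed API's TryGetProperty, which reports whether a value was actually returned for an item instead of throwing when the property wasn't loaded; here allItems is assumed to be the item collection from the question.

using System.Collections.Generic;
using System.Linq;
using Microsoft.Exchange.WebServices.Data;

static long CountItemsWithDateTimeReceived(IEnumerable<Item> allItems)
{
    return allItems.LongCount(item =>
    {
        object received;
        // True only when the server actually returned DateTimeReceived for this item.
        return item.TryGetProperty(ItemSchema.DateTimeReceived, out received);
    });
}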

Why does HasNext() return false when GetCount() returns results?

I am using libRets for .NET, querying http://retsgw.flexmls.com/rets2_1/ with a valid user account. From C#, after calling Search() I check the count using GetCount() and get 6300 results, but the first call to HasNext() returns false.
Checking the XML response, it looks like the results are empty even though the result count reports a number.
So... where did the results go?
The exact query is the following:
http://retsgw.flexmls.com/rets2_1/Search?Class=OpenHouse&Count=1&QueryType=DMQL2&SearchType=OpenHouse&Select=ListingID&StandardNames=1
Here is the request:
SearchRequest request = client.CreateSearchRequest("OpenHouse", "OpenHouse", "");
request.SetStandardNames(true);
request.SetSelect("ListingID");
Here is how the request is made:
SearchResultSet result = client.Search(request);
Here is how the result is handled:
while (result.HasNext()) {
    // Do something
}
So, it looks like the FlexMLS Support was able to help (rather quickly).
I needed to add &Format=COMPACT-DECODED to the query string.
So, in the code it would look like this:
request.SetFormatType(SearchRequest.FormatType.COMPACT_DECODED);
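Putting that together with the request code from the question, the only change is the extra SetFormatType line (everything else is unchanged from above):

SearchRequest request = client.CreateSearchRequest("OpenHouse", "OpenHouse", "");
request.SetStandardNames(true);
request.SetSelect("ListingID");
request.SetFormatType(SearchRequest.FormatType.COMPACT_DECODED);   // the missing piece

SearchResultSet result = client.Search(request);
while (result.HasNext()) {
    // Do something
}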
1) You are setting StandardNames to true AND then setting a selection. That selection may not exist in StandardNames. (You've reviewed the metadata returned by the server, right?) It's possible the server doesn't take the select into account when doing the count, but does on a full query, and therefore it doesn't have any information to send back because it doesn't have a Table matching what you selected. What happens when you don't set the Select?
2) Have you done a packet trace or set libRETS to log the network traffic to a file? (I can't tell if that's what you mean by "Checking the XML response, it looks like the results are empty even though the result count reports a number.") If you haven't, do that and see if the server is passing back any information.
If the server is passing back information, you might have discovered a bug in libRETS, and I invite you to join the libRETS-users mailing list and share this data (and that network trace) there.
If the server is passing back 0 results, you may need to contact your MLS and/or FlexMLS to see whether you have permission to view the results. Some RETS servers have fine-grained permissions where you can get the count but not the data.

Meteor: server side sort in publication not behaving as expected on page change

I have the following setup: I'm listing all items on the front page (publication frontpageItems) and listing selected items on a user page (publication userpageItems). I'm sorting the items on both pages by a lastactivity field, and I'm doing this in the server-side publication rather than on the client, since I want the front page to be static once loaded.
Whenever the page loads initially, everything sorts fine, i.e. 1, 2, 3.
When I navigate from the front page to the user page I have a subset of 2, 3 for example.
When I navigate back to the front page the sort is as follows: 2, 3, 1.
I assume this is because Meteor caches the items, but the sort order is definitely wrong here. Refreshing the front page makes the sort correct again.
Is there any way to fix this? I.e., clear the subscription on page switch, for example? I'm using iron-router, btw, to subscribe to the publications before page load. Adding client-side sorting plus reactive: false on the client solves my problem, btw, but I can't use this since I DO need reactivity on the limit of the subscription for pagination/infinite scrolling.
Or, as a workaround, is it possible to disable reactivity on the client for sort, but keep it for limit?
As David mentioned below, I do need sorting on the client, so I held on to that and tried some different directions with my publication to achieve a sort of partial reactivity on the client.
I ended up implementing a publication with an observeChanges pattern and sorting on lastactivity on the client side. This publication makes sure that:
Initially all items are sent to the client (within the limit, of course)
Whenever an item is changed, the lastactivity field is stripped and not updated, but all other attributes are updated and sent over to the client
Whenever an item is added, it gets a later lastactivity value than the beforeLastactivity variable and thus is not added
Increasing the limit for infinite scrolling keeps working
When a client refreshes, everything is sent down to the client again because beforeLastactivity gets updated
Meteor.publish('popularPicks', function(limit, beforeLastactivity) {
    var init = true;
    var self = this;

    if (!beforeLastactivity)
        beforeLastactivity = new Date().getTime();
    if (!limit)
        limit = 18;

    var query = Picks.find({}, {
        limit: limit,
        sort: { lastactivity: -1 }
    });

    var handle = query.observeChanges({
        added: function(id, doc) {
            // Only the initial result set is published, and only documents
            // older than beforeLastactivity; later additions are ignored.
            if (init) {
                if (doc.lastactivity < beforeLastactivity)
                    self.added('picks', id, doc);
            }
        },
        changed: function(id, fields) {
            // Strip lastactivity so changes to it don't reorder the client.
            if (fields.lastactivity)
                delete fields.lastactivity;
            self.changed('picks', id, fields);
        }
    });

    init = false;
    self.ready();

    self.onStop(function() {
        handle.stop();
    });
});
As I explained in the answer to this question, sorting in the publish function has no effect on the order of documents on the client. Because you are using a limit, however, sorting on the server does affect which items will be on the client.
I've read the rest of your question a dozen times, and I'm unclear exactly which situations require reactivity and which do not. Based on the phrase "I want the front page to be static once loaded", I'd recommend using a method (instead of a subscription) to load the required items (maybe when the client connects?) and inserting the data into a session variable.
Based on our discussion, if you could get an array of ids which you want to have appear on the page you can reactively update them by subscribing to a publish function like this:
Meteor.publish('itemsWithoutLastActivity', function(ids) {
    check(ids, [String]);
    return Items.find({_id: {$in: ids}}, {fields: {lastActivity: 0}});
});
This will publish all items with ids in the given array, but it will not publish the lastActivity property.
However, the documents received from your initial subscription will also still be reactive - here's where it gets really tricky. If you leave that first subscription running, your sort order will change when items get updated.
One way to deal with this is to not subscribe to the data to begin with. Make a method call to get an ordered set of ids and then subscribe to itemsWithoutLastActivity with those ids. You will need to come up with a creative way to order them on the client - maybe just an {{#each}} which iterates over the ids, with each sub-template loading the required item by id.
