Google calendar query returns at most 25 entries - gdata-api

I'm trying to delete all calendar entries from today forward. I run a query then call getEntries() on the query result. getEntries() always returns 25 entries (or less if there are fewer than 25 entries on the calendar). Why aren't all the entries returned? I'm expecting about 80 entries.
As a test, I tried running the query, deleting the 25 entries returned, running the query again, deleting again, etc. This works, but there must be a better way.
Below is the Java code that only runs the query once.
CalendarQuery myQuery = new CalendarQuery(feedUrl);
DateFormat dfGoogle = new SimpleDateFormat("yyyy-MM-dd'T00:00:00'");
Date dt = Calendar.getInstance().getTime();
myQuery.setMinimumStartTime(DateTime.parseDateTime(dfGoogle.format(dt)));
// Make the end time far into the future so we delete everything
myQuery.setMaximumStartTime(DateTime.parseDateTime("2099-12-31T23:59:59"));
// Execute the query and get the response
CalendarEventFeed resultFeed = service.query(myQuery, CalendarEventFeed.class);
// !!! This returns 25 (or less if there are fewer than 25 entries on the calendar) !!!
int test = resultFeed.getEntries().size();
// Delete all the entries returned by the query
for (int j = 0; j < resultFeed.getEntries().size(); j++) {
    CalendarEventEntry entry = resultFeed.getEntries().get(j);
    entry.delete();
}
PS: I've looked at the Data API Developer's Guide and the Google Data API Javadoc. These sites are okay, but not great. Does anyone know of additional Google API documentation?

You can increase the number of results with myQuery.setMaxResults(). There is still an upper limit on max-results, though, so you can make multiple queries ('paged' results) by varying myQuery.setStartIndex().
http://code.google.com/apis/gdata/javadoc/com/google/gdata/client/Query.html#setMaxResults(int)
http://code.google.com/apis/gdata/javadoc/com/google/gdata/client/Query.html#setStartIndex(int)

Based on the answers from Jim Blackler and Chris Kaminski, I enhanced my code to read the query results in pages. I also do the delete as a batch, which should be faster than doing individual deletions.
I'm providing the Java code here in case it is useful to anyone.
CalendarQuery myQuery = new CalendarQuery(feedUrl);
DateFormat dfGoogle = new SimpleDateFormat("yyyy-MM-dd'T00:00:00'");
Date dt = Calendar.getInstance().getTime();
myQuery.setMinimumStartTime(DateTime.parseDateTime(dfGoogle.format(dt)));
// Make the end time far into the future so we delete everything
myQuery.setMaximumStartTime(DateTime.parseDateTime("2099-12-31T23:59:59"));
// Set the maximum number of results to return for the query.
// Note: A GData server may choose to provide fewer results, but will never provide
// more than the requested maximum.
myQuery.setMaxResults(5000);
int startIndex = 1;
int entriesReturned;
List<CalendarEventEntry> allCalEntries = new ArrayList<CalendarEventEntry>();
CalendarEventFeed resultFeed;
// Run our query as many times as necessary to get all the
// Google calendar entries we want
while (true) {
    myQuery.setStartIndex(startIndex);
    // Execute the query and get the response
    resultFeed = service.query(myQuery, CalendarEventFeed.class);
    entriesReturned = resultFeed.getEntries().size();
    if (entriesReturned == 0) {
        // We've hit the end of the list
        break;
    }
    // Add the returned entries to our local list
    allCalEntries.addAll(resultFeed.getEntries());
    startIndex = startIndex + entriesReturned;
}
// Delete all the entries as a batch delete
CalendarEventFeed batchRequest = new CalendarEventFeed();
for (int i = 0; i < allCalEntries.size(); i++) {
    CalendarEventEntry entry = allCalEntries.get(i);
    BatchUtils.setBatchId(entry, Integer.toString(i));
    BatchUtils.setBatchOperationType(entry, BatchOperationType.DELETE);
    batchRequest.getEntries().add(entry);
}
// Get the batch link URL and send the batch request
Link batchLink = resultFeed.getLink(Link.Rel.FEED_BATCH, Link.Type.ATOM);
CalendarEventFeed batchResponse = service.batch(new URL(batchLink.getHref()), batchRequest);
// Ensure that all the operations were successful
boolean isSuccess = true;
StringBuffer batchFailureMsg = new StringBuffer("These entries in the batch delete failed:");
for (CalendarEventEntry entry : batchResponse.getEntries()) {
    String batchId = BatchUtils.getBatchId(entry);
    if (!BatchUtils.isSuccess(entry)) {
        isSuccess = false;
        BatchStatus status = BatchUtils.getBatchStatus(entry);
        batchFailureMsg.append("\nID: " + batchId + " Reason: " + status.getReason());
    }
}
if (!isSuccess) {
    throw new Exception(batchFailureMsg.toString());
}

There is a small quote on the API page
http://code.google.com/apis/calendar/data/1.0/reference.html#Parameters
Note: The max-results query parameter for Calendar is set to 25 by default, so that you won't receive an entire calendar feed by accident. If you want to receive the entire feed, you can specify a very large number for max-results.
So to get all events from a Google Calendar feed, we do this:
google.calendarurl.com/.../basic?max-results=999999
In the API you can do the same by calling setMaxResults(999999) on the query.

I got here while searching for a Python solution. Should anyone be stuck in the same way, the important line is the fourth:
query = gdata.calendar.service.CalendarEventQuery(cal, visibility, projection)
query.start_min = start_date
query.start_max = end_date
query.max_results = 1000

Unfortunately, Google limits the number of results you can retrieve per query. This keeps requests within their query governor guidelines (HTTP requests are not allowed to take more than 30 seconds, for example). Their whole architecture is built around this, so you might as well build the paging logic as you have.

Related

How to get a list of the links on a Wikipedia page (article) and the number of people who clicked on each link?

This is my code so far. I now have a list of how many people looked at each page (article), but I wondered if it's possible to get a list of the links on a Wikipedia page (article) and how many times each link is clicked?
String[] articles = {"Hitler", "SOA", "Albert_Einstein"};
void setup() {
    for (int i = 0; i < articles.length; i++) {
        String article = articles[i];
        String start = "20160101"; // YYYYMMDD
        String end = "20170101"; // YYYYMMDD
        // documentation: https://wikimedia.org/api/rest_v1/?doc#!/Pageviews_data/get_metrics_pageviews_per_article_project_access_agent_article_granularity_start_end
        String query = "http://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/all-agents/"+article+"/daily/"+start+"/"+end;
        JSONObject json = loadJSONObject(query);
        JSONArray items = json.getJSONArray("items");
        int totalviews = 0;
        for (int j = 0; j < items.size(); j++) {
            JSONObject item = items.getJSONObject(j);
            int views = item.getInt("views");
            totalviews += views;
        }
        println(article+" "+totalviews);
    }
}
To get the links from an article, use action=query in the API together with prop=links.
In your example: https://en.wikipedia.org/w/api.php?action=query&format=json&prop=links&meta=&titles=Albert+Einstein%7CHitler%7CSOA&pllimit=500
Note that this does not return all results (you can only get 500 links at a time), so you need to make further requests, passing the plcontinue value you get back as a parameter in the next request, as sketched below.
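Here is a rough Processing sketch of that continuation loop (my own illustration, not tested against the live API; the printLinks helper name, the formatversion=2 parameter for flatter JSON, and the pllimit value are assumptions on my part):
void printLinks(String article) {
    String plcontinue = null;
    while (true) {
        String query = "https://en.wikipedia.org/w/api.php?action=query&format=json&formatversion=2"
                     + "&prop=links&pllimit=500&titles=" + article;
        if (plcontinue != null) {
            // the continuation token contains '|' characters, so escape them for the URL
            query += "&plcontinue=" + plcontinue.replace("|", "%7C");
        }
        JSONObject json = loadJSONObject(query);
        JSONArray pages = json.getJSONObject("query").getJSONArray("pages");
        for (int p = 0; p < pages.size(); p++) {
            JSONObject page = pages.getJSONObject(p);
            if (!page.hasKey("links")) continue;
            JSONArray links = page.getJSONArray("links");
            for (int l = 0; l < links.size(); l++) {
                println(links.getJSONObject(l).getString("title"));
            }
        }
        // no "continue" block means the last batch of links has been fetched
        if (json.hasKey("continue")) {
            plcontinue = json.getJSONObject("continue").getString("plcontinue");
        } else {
            break;
        }
    }
}
As far as I know the API does not expose per-link click counts; a common approximation is to feed each returned title back into the pageviews request from the question above.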
Break your problem down into smaller steps.
Create a single program that just returns all of the links on a wikipedia page. Make sure you have that program working perfectly, and post an MCVE if you get stuck.
Separately from that, create another program that takes a hardcoded URL and returns the number of views that URL has. Again, post an MCVE if you get stuck. When you get that working, move up to a program that takes a hardcoded ArrayList of URLs and returns the pageviews for each URL.
Then when you have them both working separately, then you can start thinking about combining them.

Browse all documents and bulk update some of them

I am using the Jest client for Elasticsearch to browse an index of documents and update one field. My workflow is to run an empty query with paging and check whether I can compute the extra field. If I can, I update the relevant documents in one bulk update.
Pseudo-code
private void process() {
    int from = 0
    int size = this.properties.batchSize
    boolean moreResults = true
    while (moreResults) {
        moreResults = handleBatch(from, this.properties.batchSize)
        from += size
    }
}

private boolean handleBatch(int from, int size) {
    log.info("Processing records $from to " + (from + size))
    def result = search(from, size)
    if (result.isSucceeded()) {
        // Check each element and perform an upgrade
    }
    // return true if the query returned at least one item
}

private SearchResult search(int from, int size) {
    String query =
        '{ "from": ' + from + ', ' +
        '"size": ' + size + '}'
    Search search = new Search.Builder(query)
        .addIndex("my-index")
        .addType('my-document')
        .build();
    jestClient.execute(search)
}
I don't get any errors, but when I run the batch several times it keeps finding "new" documents to upgrade even though the total number of documents hasn't changed. I suspected that updated documents were being processed several times, which I confirmed by checking the processed IDs.
How can I run the query so that only the original documents are processed and the updates don't interfere with it?
Instead of running a normal search (i.e. using from+size), you need to run a scroll search query. The main difference is that the scroll freezes a snapshot of the documents (at the time of the first query) and iterates over that snapshot. Whatever changes happen after the first scroll query won't be considered.
Using Jest, you need to modify your code to look more like this:
// 1. Initiate the scroll request
// (searchSourceBuilder holds your existing query; an empty match_all query works too)
Search search = new Search.Builder(searchSourceBuilder.toString())
        .addIndex("my-index")
        .addType("my-document")
        .addSort(new Sort("_doc"))
        .setParameter(Parameters.SIZE, size)
        .setParameter(Parameters.SCROLL, "5m")
        .build();
JestResult result = jestClient.execute(search);

// 2. Get the scroll_id to use in subsequent requests
String scrollId = result.getJsonObject().get("_scroll_id").getAsString();

// 3. Issue scroll search requests until you have retrieved all results
boolean moreResults = true;
while (moreResults) {
    SearchScroll scroll = new SearchScroll.Builder(scrollId, "5m")
            .setParameter(Parameters.SIZE, size)
            .build();
    result = jestClient.execute(scroll);
    JsonArray hits = result.getJsonObject().getAsJsonObject("hits").getAsJsonArray("hits"); // com.google.gson.JsonArray
    moreResults = hits.size() > 0;
}
You need to modify your process and handleBatch methods along these lines. It should be straightforward; let me know if not.
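For reference, a minimal sketch of what that rewrite could look like (assumptions on my part: a hypothetical processHits helper standing in for your upgrade logic, a plain match_all query, and the same Jest classes used above, i.e. io.searchbox.core.Search/SearchScroll, io.searchbox.params.Parameters, io.searchbox.core.search.sort.Sort and com.google.gson.JsonArray):
private void process() throws IOException {
    // Open the scroll: the snapshot is taken here, so later updates are not re-read
    Search search = new Search.Builder("{\"query\":{\"match_all\":{}}}")
            .addIndex("my-index")
            .addType("my-document")
            .addSort(new Sort("_doc"))
            .setParameter(Parameters.SIZE, this.properties.batchSize)
            .setParameter(Parameters.SCROLL, "5m")
            .build();
    JestResult result = jestClient.execute(search);
    String scrollId = result.getJsonObject().get("_scroll_id").getAsString();

    // The first page of hits comes back with the initial request
    JsonArray hits = result.getJsonObject().getAsJsonObject("hits").getAsJsonArray("hits");
    while (hits.size() > 0) {
        processHits(hits); // your "check each element and perform an upgrade" logic

        // Fetch the next page of the same snapshot
        SearchScroll scroll = new SearchScroll.Builder(scrollId, "5m").build();
        result = jestClient.execute(scroll);
        scrollId = result.getJsonObject().get("_scroll_id").getAsString();
        hits = result.getJsonObject().getAsJsonObject("hits").getAsJsonArray("hits");
    }
}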

How to retrieve total view count of large number of pages combined from the GA API

We are interested in the combined statistics of different pages from the Google Analytics Core Reporting API. The only way I found to query statistics for multiple pages at the same time is by creating a filter like so:
ga:pagePath==page?id=a,ga:pagePath==page?id=b,ga:pagePath==page?id=c
And this gets escaped inside the filters parameter of the GET query.
However when the GET query gets over 2000 characters I get the following response:
414. That’s an error.
The requested URL /analytics/v3/data/ga... is too large to process. That’s all we know.
Note that, just like in the example call, the only part that differs per page is a GET parameter in the pagePath, yet we have to OR in a whole new filter clause that repeats the pagePath dimension as well as the part of the path that is always identical.
Is there any way to specify a large number of different pages to query without hitting this limit in the GET query (I can't find any documentation for doing POST requests)? Or are there alternatives to creating batches of a max of X different pages per query and adding them up on my end?
Instead of using ga:pagePath as part of a filter, you should use it as a dimension. You can get up to 10,000 rows per query this way and paginate to get all results, then parse the results client-side to get what you need. Additionally, use a filter to scope the results down where possible, based on your site structure or page names.
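For illustration, here is a rough sketch of that pagination with the v3 Core Reporting Java client (assumptions on my part: the com.google.api.services.analytics client library, an already-authorized Analytics instance, and placeholder profile ID, dates and metric):
import com.google.api.services.analytics.Analytics;
import com.google.api.services.analytics.model.GaData;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class PageViewsByPath {
    // Collects ga:pagePath / ga:pageviews rows, 10,000 at a time
    public static List<List<String>> fetchAll(Analytics analytics, String profileId) throws IOException {
        List<List<String>> allRows = new ArrayList<List<String>>();
        int startIndex = 1;
        while (true) {
            GaData data = analytics.data().ga()
                    .get("ga:" + profileId, "2017-01-01", "2017-01-31", "ga:pageviews")
                    .setDimensions("ga:pagePath")
                    .setStartIndex(startIndex)
                    .setMaxResults(10000)
                    .execute();
            if (data.getRows() == null || data.getRows().isEmpty()) {
                break; // no more rows
            }
            allRows.addAll(data.getRows());
            startIndex += data.getRows().size();
            Integer total = data.getTotalResults();
            if (total != null && startIndex > total) {
                break; // fetched everything
            }
        }
        return allRows; // filter/sum the pages you care about client-side
    }
}
The filtering by page ID then happens in your own code, so the request URL stays short regardless of how many pages you are interested in.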
I am sharing some sample code with which you can fetch more than 10,000 records with the help of ItemsPerPage:
private void GetDataofPpcInfo(DateTime dtStartDate, DateTime dtEndDate, AnalyticsService gas, List<PpcReportData> lstPpcReportData, string strProfileID)
{
    int intStartIndex = 1;
    int intIndexCnt = 0;
    int intMaxRecords = 10000;
    var metrics = "ga:impressions,ga:adClicks,ga:adCost,ga:goalCompletionsAll,ga:CPC,ga:visits";
    var r = gas.Data.Ga.Get("ga:" + strProfileID, dtStartDate.ToString("yyyy-MM-dd"), dtEndDate.ToString("yyyy-MM-dd"),
        metrics);
    r.Dimensions = "ga:campaign,ga:keyword,ga:adGroup,ga:source,ga:isMobile,ga:date";
    r.MaxResults = 10000;
    r.Filters = "ga:medium==cpc;ga:campaign!=(not set)";
    while (true)
    {
        r.StartIndex = intStartIndex;
        var dimensionOneData = r.Fetch();
        dimensionOneData.ItemsPerPage = intMaxRecords;
        if (dimensionOneData != null && dimensionOneData.Rows != null)
        {
            var enUS = new CultureInfo("en-US");
            intIndexCnt++;
            foreach (var lstFirst in dimensionOneData.Rows)
            {
                var objPPCReportData = new PpcReportData();
                objPPCReportData.Campaign = lstFirst[dimensionOneData.ColumnHeaders.IndexOf(dimensionOneData.ColumnHeaders.FirstOrDefault(h => h.Name == "ga:campaign"))];
                objPPCReportData.Keywords = lstFirst[dimensionOneData.ColumnHeaders.IndexOf(dimensionOneData.ColumnHeaders.FirstOrDefault(h => h.Name == "ga:keyword"))];
                lstPpcReportData.Add(objPPCReportData);
            }
            intStartIndex = intIndexCnt * intMaxRecords + 1;
        }
        else break;
    }
}
The only problematic thing is that your query length shouldn't exceed around 2,000-odd characters.

LinqToExcel - Need to start at a specific row

I'm using the LinqToExcel library. It's working great so far, except that I need to start the query at a specific row. This is because the Excel spreadsheet from the client has some images and "header" information at the top of the file before the data actually starts.
The data itself will be simple to read and is fairly generic, I just need to know how to tell the ExcelQueryFactory to start at a specific row.
I am aware of the WorksheetRange<Company>("B3", "G10") option, but I don't want to specify an ending row, just where to start reading the file.
I'm using the latest version of LinqToExcel with C#.
I just tried this code and it seemed to work just fine:
var book = new LinqToExcel.ExcelQueryFactory(@"E:\Temporary\Book1.xlsx");
var query =
    from row in book.WorksheetRange("A4", "B16384")
    select new
    {
        Name = row["Name"].Cast<string>(),
        Age = row["Age"].Cast<int>(),
    };
I only got back the rows with data.
I suppose you have already solved this, but maybe for others: it looks like you can use
var excel = new ExcelQueryFactory(path);
var allRows = excel.WorksheetNoHeader();
// start from the 3rd row (zero-based indexing); length = allRows.Count() or a computed range of rows you want
for (int i = 2; i < length; i++)
{
    RowNoHeader row = allRows.ElementAtOrDefault(i);
    // process the row - access columns as you want - also zero-based indexing
}
Not as simple as specifying a range like ("B3", ...), but it's another way.
Hope this helps at least somebody ;)
I tried this; it works fine for my scenario.
// get the sheet info
var faceWrksheet = excel.Worksheet(facemechSheetName);
// get the total row count
int _faceMechRows = faceWrksheet.Count();
// append it to the end of the range
var faceMechResult = excel.WorksheetRange<ExcelFaceMech>("A5", "AS" + _faceMechRows.ToString(), facemechSheetName)
    .Where(i => i.WorkOrder != null).Select(x => x).ToList();
Have you tried WorksheetRange<Company>("B3", "G")?
Unfortunately, at this moment and iteration of the LinqToExcel framework, there does not appear to be any way to do this.
To get around this, we are requiring the client to put the data to be uploaded in its own "sheet" within the Excel document, with the header row as the first row and the data under it. If they want any "metadata" they will need to include it in another sheet. Below is an example from the LinqToExcel documentation of how to query a specific sheet.
var excel = new ExcelQueryFactory("excelFileName");
var oldCompanies = from c in excel.Worksheet<Company>("US Companies") // worksheet name = 'US Companies'
                   where c.LaunchDate < new DateTime(1900, 1, 1)
                   select c;

How to do paging with Exchange Web Services CalendarView

If I do this:
_calendar = (CalendarFolder)Folder.Bind(_service, WellKnownFolderName.Calendar);
var findResults = _calendar.FindAppointments(
    new CalendarView(startDate.Date, endDate.Date)
);
I sometimes get an exception that too many items were found.
"You have exceeded the maximum number of objects that can be returned for the find operation. Use paging to reduce the result size and try your request again."
CalendarView supports a constructor that will let me specify MaxItemsReturned, but I can't figure out how I would call it again, specifying the offset for paging. ItemView has this constructor:
public ItemView(int pageSize, int offset)
And the usage of that is obvious.
What about CalendarView? How does one do paging with a CalendarView?
I could reduce the date range to be a shorter span, but there's still no way of determining if it will work for sure.
CalendarView is not actually derived from PagedView, so all of the paging logic that you expect isn't possible. MaxItemsReturned is more of an upper limit than a page size. The error that's returned is more relevant to the PagedView derived view types.
I played around with some PowerShell to emulate paging by rolling the CalendarView window based on the last item returned, but unfortunately the logic behind the CalendarView and Appointment expansion makes it impossible to get exactly what you need. Basically, as it does the expansion it stops at "N" items, but you might have more than one appointment that starts at the exact same time, and it may give you one of them but not the rest. Also, any appointments that overlap the window get included, so the code below would go into an infinite loop if you had 50 appointments on the calendar that all had the same start time.
Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\1.2\Microsoft.Exchange.WebServices.dll"
$service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$cred = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials ($user , $passwd)
$service.UseDefaultCredentials = $false
$service.Credentials = $cred
$service.AutodiscoverUrl($user)
$num=50
$total=0
$propsetfc = [Microsoft.Exchange.WebServices.Data.BasePropertySet]::FirstClassProperties
$calfolder = [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Calendar
$service.UserAgent = "EWSCalViewTest"
$calview = New-Object Microsoft.Exchange.WebServices.Data.CalendarView("1/1/2012","12/31/2012", $num)
$calview.PropertySet = $propsetfc
do {
    $findresults = $service.FindAppointments($calfolder,$calview)
    write-host "Found:" $findresults.Items.Count "of" $findresults.TotalCount
    $calview.StartDate = $findresults.Items[$findresults.Items.Count-1].Start
    $total += $findresults.Items.Count
} while($findresults.MoreAvailable)
write-host $total "total found (including dups)"
Unfortunately the expansion and overlap logic mean you'll get duplicates this way, at least one duplicate for each call beyond the first.
If I had to write code using CalendarView, I'd probably use a MaxItemsReturned of 1000 (this is also the limit that throws you into the error condition if you don't specify MaxItemsReturned). If you get them all in one call, you're good. If you have to make a second call, then you'll have to do some extra work to dedup the result set. I'd also try to limit the burden on the server by using as narrow of a date window as possible in the CalendarView since you're asking Exchange to calculate the expansion of recurring appointments across the entire time span. It can be a fairly expensive operation for the server.
You can use ItemView and SearchFilter to query appointments:
var itemView = new ItemView(100, 0);
itemView.PropertySet = new PropertySet(BasePropertySet.IdOnly,
    ItemSchema.Subject, AppointmentSchema.Start, AppointmentSchema.End);

var filter = new SearchFilter.SearchFilterCollection(LogicalOperator.And,
    new SearchFilter.IsEqualTo(ItemSchema.ItemClass, "IPM.Appointment"),
    new SearchFilter.IsGreaterThanOrEqualTo(AppointmentSchema.Start, startDate),
    new SearchFilter.IsLessThan(AppointmentSchema.Start, endDate));

bool moreAvailable = true;
while (moreAvailable)
{
    var result = _service.FindItems(WellKnownFolderName.Calendar, filter, itemView);
    foreach (var appointment in result.OfType<Appointment>())
    {
        DateTime start = appointment.Start;
        DateTime end = appointment.End;
        string subject = appointment.Subject;
        // ...
    }
    itemView.Offset += itemView.PageSize;
    moreAvailable = result.MoreAvailable;
}
You can still paginate the FindAppointments call by manipulating the CalendarView start dates.
var cal = CalendarFolder.Bind(_service, WellKnownFolderName.Calendar);
var cv = new CalendarView(start, end, 1000);
var appointments = new List<Appointment>();

var result = cal.FindAppointments(cv);
appointments.AddRange(result);
while (result.MoreAvailable)
{
    cv.StartDate = appointments.Last().Start;
    result = cal.FindAppointments(cv);
    appointments.AddRange(result);
}
Though I don't know if they come back in order; if they don't, you might have to use the latest event start date and remove the duplicates.
