How to get a list of the links on a Wikipedia page (article) and the number of times each link was clicked? - processing

This is my code so far. It gives me a list of how many people looked at each page (article), but I wondered if it's also possible to make a list of the links on a Wikipedia page (article) and how many times each of those links is clicked?
String[] articles = {"Hitler", "SOA", "Albert_Einstein"};

void setup() {
  for (int i = 0; i < articles.length; i++) {
    String article = articles[i];
    String start = "20160101"; // YYYYMMDD
    String end = "20170101"; // YYYYMMDD
    // documentation: https://wikimedia.org/api/rest_v1/?doc#!/Pageviews_data/get_metrics_pageviews_per_article_project_access_agent_article_granularity_start_end
    String query = "http://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/all-agents/"+article+"/daily/"+start+"/"+end;
    JSONObject json = loadJSONObject(query);
    JSONArray items = json.getJSONArray("items");
    int totalviews = 0;
    for (int j = 0; j < items.size(); j++) {
      JSONObject item = items.getJSONObject(j);
      int views = item.getInt("views");
      totalviews += views;
    }
    println(article+" "+totalviews);
  }
}

To get the links from an article, use action=query in the API together with prop=links.
In your example: https://en.wikipedia.org/w/api.php?action=query&format=json&prop=links&meta=&titles=Albert+Einstein%7CHitler%7CSOA&pllimit=500
Do note that this does not return all the results (you can only get 500 at a time), so you need to make further requests, passing back the plcontinue value you received as a parameter in the next request.
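A minimal Processing helper along those lines might look something like this (an untested sketch only; the fetchLinks name and the formatversion=2 / plnamespace=0 parameters are my own additions for convenience, not something the answer requires):
StringList fetchLinks(String article) {
  StringList links = new StringList();
  String base = "https://en.wikipedia.org/w/api.php?action=query&format=json&formatversion=2"
              + "&prop=links&plnamespace=0&pllimit=500&titles=" + article;
  String plcontinue = null;
  while (true) {
    String query = base;
    if (plcontinue != null) {
      // plcontinue values contain '|' and spaces, so encode them before sending them back
      query += "&continue=%7C%7C&plcontinue=" + plcontinue.replace("|", "%7C").replace(" ", "%20");
    }
    JSONObject json = loadJSONObject(query);
    JSONArray pages = json.getJSONObject("query").getJSONArray("pages");
    for (int i = 0; i < pages.size(); i++) {
      JSONObject page = pages.getJSONObject(i);
      if (!page.hasKey("links")) continue; // this batch holds no links for this page
      JSONArray pageLinks = page.getJSONArray("links");
      for (int j = 0; j < pageLinks.size(); j++) {
        links.append(pageLinks.getJSONObject(j).getString("title"));
      }
    }
    if (json.hasKey("continue")) {
      plcontinue = json.getJSONObject("continue").getString("plcontinue");
    } else {
      break; // no continuation token left, we have all the links
    }
  }
  return links;
}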

Break your problem down into smaller steps.
Create a single program that just returns all of the links on a Wikipedia page. Make sure you have that program working perfectly, and post an MCVE if you get stuck.
Separately, create a second program that takes a hardcoded URL and returns the number of views that URL has. Again, post an MCVE if you get stuck. When you get that working, move up to a program that takes a hardcoded ArrayList of URLs and returns the pageviews for each URL.
When you have both working separately, you can start thinking about combining them; a rough outline of such a combination is sketched below.
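For what it's worth, once both halves work on their own, the combination could look roughly like this (again just a sketch: fetchLinks() is the hypothetical helper from the snippet above, totalViews() stands for the pageviews loop from the question wrapped into a function, and titles with special characters may need extra URL-encoding):
void setup() {
  StringList links = fetchLinks("Albert_Einstein");
  for (String link : links) {
    String title = link.replace(" ", "_");
    // Note: the pageviews API reports how often each linked article was viewed overall,
    // not how often that link was clicked from this particular page.
    int views = totalViews(title, "20160101", "20170101");
    println(title + " " + views);
  }
}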

Related

Angular.js - Data from AJAX request as a ng-repeat collection

In my web app I'm receiving data every 3-4 seconds from an AJAX call to an API, like this:
$http.get('api/invoice/collecting').success(function(data) {
    $scope.invoices = data;
});
Then I display the data, like this: http://jsfiddle.net/geUe2/1/
The problem is that every time I do $scope.invoices = data, ng-repeat rebuilds the DOM area shown in the jsfiddle, and I lose all <input> values.
I've tried to do:
angular.extend()
deep version of jQuery.extend
some other merging/extending/deep-copying functions
but they can't handle a situation like this:
On my client I have [invoice1, invoice2, invoice3] and the server sends me [invoice1, invoice3]. So I need invoice2 to be deleted from the view.
What are the ways to solve this problem?
Check the ng-repeat docs.
You could use the track by option:
variable in expression track by tracking_expression – You can also provide an optional tracking function which can be used to associate the objects in the collection with the DOM elements. If no tracking function is specified the ng-repeat associates elements by identity in the collection. It is an error to have more than one tracking function to resolve to the same key. (This would mean that two distinct objects are mapped to the same DOM element, which is not possible.) Filters should be applied to the expression, before specifying a tracking expression.
For example: item in items track by item.id is a typical pattern when the items come from the database. In this case the object identity does not matter. Two objects are considered equivalent as long as their id property is same.
You need to collect data from the DOM when an update arrives from the server. Save whatever data is relevant (it could be only the input values) and don't forget to include the identifier for the data object, such as data._id. All of this should be saved in a temporary object such as $scope.oldInvoices.
Then, after collecting it from the DOM, re-update the DOM with the new data (the way you are doing right now): $scope.invoices = data.
Now use underscore.js's _.findWhere to check whether your data._id is present in the new update, and if so, re-assign (you can use angular.extend here) the saved value to the relevant invoice.
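A rough sketch of that approach (my own illustration rather than tested code; it assumes the <input>s are bound via ng-model to a field such as invoice.amount, so "collecting from the DOM" amounts to keeping the previous scope objects around, and that each invoice carries an _id):
$http.get('api/invoice/collecting').success(function (data) {
  // 1. Save the previous objects (they hold whatever the user has typed via ng-model)
  var oldInvoices = $scope.invoices || [];

  // 2. Replace the collection with the fresh server data, as before
  $scope.invoices = data;

  // 3. Restore the saved input values onto the matching new objects
  angular.forEach($scope.invoices, function (invoice) {
    var previous = _.findWhere(oldInvoices, { _id: invoice._id });
    if (previous) {
      invoice.amount = previous.amount; // keep the user's in-progress value
    }
  });
});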
It turned out that #luacassus's answer about the track by option of the ng-repeat directive was very helpful, but it didn't solve my problem. track by was adding new invoices coming from the server, but there was a problem with clearing inactive invoices.
So, this is my solution to the problem:
function change(scope, newData) {
    if (!scope.invoices) {
        scope.invoices = [];
        jQuery.extend(true, scope.invoices, newData);
    }
    // Update invoices from the server that are already present in scope.invoices
    for (var i = 0; i < scope.invoices.length; i++) {
        var isInvoiceFound = false;
        for (var j = 0; j < newData.length; j++) {
            if (scope.invoices[i] && scope.invoices[i].id && scope.invoices[i].id == newData[j].id) {
                isInvoiceFound = true;
                jQuery.extend(true, scope.invoices[i], newData[j]);
            }
        }
        if (!isInvoiceFound) {
            scope.invoices.splice(i, 1);
            i--; // stay at the same index after removing an element
        }
    }
    // Add invoices that came from the server but are not yet present in scope.invoices
    for (var j = 0; j < newData.length; j++) {
        var isInvoiceFound = false;
        for (var i = 0; i < scope.invoices.length; i++) {
            if (scope.invoices[i] && scope.invoices[i].id && scope.invoices[i].id == newData[j].id) {
                isInvoiceFound = true;
            }
        }
        if (!isInvoiceFound) scope.invoices.push(newData[j]);
    }
}
In my web app I'm using jQuery's .extend(). There's a good alternative in the lo-dash library.

How to retrieve the total view count of a large number of pages combined from the GA API

We are interested in the combined statistics of different pages from the Google Analytics Core Reporting API. The only way I found to query statistics for multiple pages at the same time is by creating a filter like so:
ga:pagePath==page?id=a,ga:pagePath==page?id=b,ga:pagePath==page?id=c
And this gets escaped inside the filters parameter of the GET query.
However, when the GET query gets over 2,000 characters I get the following response:
414. That’s an error.
The requested URL /analytics/v3/data/ga... is too large to process. That’s all we know.
Note that, just as in the example call, the only part that differs per page is a GET parameter in the pagePath, but we have to OR in a new filter that specifies both the dimension (pagePath) and the part of the path that is always identical.
Is there any way to specify a large number of different pages to query without hitting this limit in the GET query (I can't find any documentation for doing POST requests)? Or are there alternatives to creating batches of a max of X different pages per query and adding them up on my end?
Instead of using ga:pagePath as part of a filter, you should use it as a dimension. You can get up to 10,000 rows per query this way and paginate to get all results. Then parse the results client side to get what you need. Additionally, use a filter to scope the results down, if possible, based on your site structure or page names.
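A rough sketch of what that could look like, modelled on the style of the sample code in the next answer (the profile ID, date range, and path filter below are placeholders, and the exact request/response members depend on which version of the client library you use):
// Query ga:pageviews with ga:pagePath as a dimension and page through the results.
var request = gas.Data.Ga.Get("ga:12345678", "2014-01-01", "2014-12-31", "ga:pageviews");
request.Dimensions = "ga:pagePath";
request.Filters = "ga:pagePath=~^/page";  // optional: scope results down by a common path prefix
request.MaxResults = 10000;

var viewsPerPage = new Dictionary<string, long>();
int startIndex = 1;
while (true)
{
    request.StartIndex = startIndex;
    var data = request.Fetch();
    if (data == null || data.Rows == null || data.Rows.Count == 0) break;
    foreach (var row in data.Rows)
    {
        // row[0] = ga:pagePath, row[1] = ga:pageviews
        viewsPerPage[row[0]] = long.Parse(row[1]);
    }
    startIndex += data.Rows.Count;
}
// viewsPerPage now maps each pagePath to its view count; sum the entries you care about client side.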
I am sharing sample code with which you can fetch more than 10,000 records with the help of ItemsPerPage:
private void GetDataofPpcInfo(DateTime dtStartDate, DateTime dtEndDate, AnalyticsService gas, List<PpcReportData> lstPpcReportData, string strProfileID)
{
    int intStartIndex = 1;
    int intIndexCnt = 0;
    int intMaxRecords = 10000;
    var metrics = "ga:impressions,ga:adClicks,ga:adCost,ga:goalCompletionsAll,ga:CPC,ga:visits";
    var r = gas.Data.Ga.Get("ga:" + strProfileID, dtStartDate.ToString("yyyy-MM-dd"), dtEndDate.ToString("yyyy-MM-dd"),
        metrics);
    r.Dimensions = "ga:campaign,ga:keyword,ga:adGroup,ga:source,ga:isMobile,ga:date";
    r.MaxResults = 10000;
    r.Filters = "ga:medium==cpc;ga:campaign!=(not set)";
    while (true)
    {
        r.StartIndex = intStartIndex;
        var dimensionOneData = r.Fetch();
        dimensionOneData.ItemsPerPage = intMaxRecords;
        if (dimensionOneData != null && dimensionOneData.Rows != null)
        {
            var enUS = new CultureInfo("en-US");
            intIndexCnt++;
            foreach (var lstFirst in dimensionOneData.Rows)
            {
                var objPPCReportData = new PpcReportData();
                objPPCReportData.Campaign = lstFirst[dimensionOneData.ColumnHeaders.IndexOf(dimensionOneData.ColumnHeaders.FirstOrDefault(h => h.Name == "ga:campaign"))];
                objPPCReportData.Keywords = lstFirst[dimensionOneData.ColumnHeaders.IndexOf(dimensionOneData.ColumnHeaders.FirstOrDefault(h => h.Name == "ga:keyword"))];
                lstPpcReportData.Add(objPPCReportData);
            }
            intStartIndex = intIndexCnt * intMaxRecords + 1;
        }
        else break;
    }
}
Only one thing is problematic: your query length shouldn't exceed around 2,000-odd characters.

LinqToExcel - Need to start at a specific row

I'm using the LinqToExcel library. It's working great so far, except that I need to start the query at a specific row, because the Excel spreadsheet from the client has some images and "header" information at the top of the file before the data actually starts.
The data itself will be simple to read and is fairly generic, I just need to know how to tell the ExcelQueryFactory to start at a specific row.
I am aware of the WorksheetRange<Company>("B3", "G10") option, but I don't want to specify an ending row, just where to start reading the file.
Using the latest version of LinqToExcel with C#.
I just tried this code and it seemed to work just fine:
var book = new LinqToExcel.ExcelQueryFactory(@"E:\Temporary\Book1.xlsx");
var query =
    from row in book.WorksheetRange("A4", "B16384")
    select new
    {
        Name = row["Name"].Cast<string>(),
        Age = row["Age"].Cast<int>(),
    };
I only got back the rows with data.
I suppose that you have already solved this, but maybe for others - it looks like you can use:
var excel = new ExcelQueryFactory(path);
var allRows = excel.WorksheetNoHeader();
// start from the 3rd row (zero-based indexing)
int length = allRows.Count(); // or a computed range of rows you want
for (int i = 2; i < length; i++)
{
    RowNoHeader row = allRows.ElementAtOrDefault(i);
    // process the row - access columns as you want (also zero-based indexing)
}
Not as simple as specifying some Range("B3", ...), but it's another way to do it.
Hope this helps at least somebody ;)
I tried this; it works fine for my scenario.
// get the sheet info
var faceWrksheet = excel.Worksheet(facemechSheetName);
// get the total row count
int _faceMechRows = faceWrksheet.Count();
// append with the end range
var faceMechResult = excel.WorksheetRange<ExcelFaceMech>("A5", "AS" + _faceMechRows.ToString(), SheetName)
    .Where(i => i.WorkOrder != null).Select(x => x).ToList();
Have you tried WorksheetRange<Company>("B3", "G")?
Unfortunately, at this moment and iteration of the LinqToExcel framework, there does not appear to be any way to do this.
To get around this, we are requiring the client to put the data to be uploaded in its own "sheet" within the Excel document, with the header row as the first row and the data under it. If they want any "meta data" they will need to include it in another sheet. Below is an example from the LinqToExcel documentation on how to query off a specific sheet.
var excel = new ExcelQueryFactory("excelFileName");
var oldCompanies = from c in excel.Worksheet<Company>("US Companies") // worksheet name = 'US Companies'
                   where c.LaunchDate < new DateTime(1900, 1, 1)
                   select c;

Google calendar query returns at most 25 entries

I'm trying to delete all calendar entries from today forward. I run a query then call getEntries() on the query result. getEntries() always returns 25 entries (or less if there are fewer than 25 entries on the calendar). Why aren't all the entries returned? I'm expecting about 80 entries.
As a test, I tried running the query, deleting the 25 entries returned, running the query again, deleting again, etc. This works, but there must be a better way.
Below is the Java code that only runs the query once.
CalendarQuery myQuery = new CalendarQuery(feedUrl);
DateFormat dfGoogle = new SimpleDateFormat("yyyy-MM-dd'T00:00:00'");
Date dt = Calendar.getInstance().getTime();
myQuery.setMinimumStartTime(DateTime.parseDateTime(dfGoogle.format(dt)));
// Make the end time far into the future so we delete everything
myQuery.setMaximumStartTime(DateTime.parseDateTime("2099-12-31T23:59:59"));
// Execute the query and get the response
CalendarEventFeed resultFeed = service.query(myQuery, CalendarEventFeed.class);
// !!! This returns 25 (or less if there are fewer than 25 entries on the calendar) !!!
int test = resultFeed.getEntries().size();
// Delete all the entries returned by the query
for (int j = 0; j < resultFeed.getEntries().size(); j++) {
    CalendarEventEntry entry = resultFeed.getEntries().get(j);
    entry.delete();
}
PS: I've looked at the Data API Developer's Guide and the Google Data API Javadoc. These sites are okay, but not great. Does anyone know of additional Google API documentation?
You can increase the number of results with myQuery.setMaxResults(). There is still an upper limit, though, so you can make multiple queries ('paged' results) by varying myQuery.setStartIndex().
http://code.google.com/apis/gdata/javadoc/com/google/gdata/client/Query.html#setMaxResults(int)
http://code.google.com/apis/gdata/javadoc/com/google/gdata/client/Query.html#setStartIndex(int)
Based on the answers from Jim Blackler and Chris Kaminski, I enhanced my code to read the query results in pages. I also do the delete as a batch, which should be faster than doing individual deletions.
I'm providing the Java code here in case it is useful to anyone.
CalendarQuery myQuery = new CalendarQuery(feedUrl);
DateFormat dfGoogle = new SimpleDateFormat("yyyy-MM-dd'T00:00:00'");
Date dt = Calendar.getInstance().getTime();
myQuery.setMinimumStartTime(DateTime.parseDateTime(dfGoogle.format(dt)));
// Make the end time far into the future so we delete everything
myQuery.setMaximumStartTime(DateTime.parseDateTime("2099-12-31T23:59:59"));
// Set the maximum number of results to return for the query.
// Note: A GData server may choose to provide fewer results, but will never provide
// more than the requested maximum.
myQuery.setMaxResults(5000);
int startIndex = 1;
int entriesReturned;
List<CalendarEventEntry> allCalEntries = new ArrayList<CalendarEventEntry>();
CalendarEventFeed resultFeed;
// Run our query as many times as necessary to get all the
// Google calendar entries we want
while (true) {
    myQuery.setStartIndex(startIndex);
    // Execute the query and get the response
    resultFeed = service.query(myQuery, CalendarEventFeed.class);
    entriesReturned = resultFeed.getEntries().size();
    if (entriesReturned == 0) {
        // We've hit the end of the list
        break;
    }
    // Add the returned entries to our local list
    allCalEntries.addAll(resultFeed.getEntries());
    startIndex = startIndex + entriesReturned;
}
// Delete all the entries as a batch delete
CalendarEventFeed batchRequest = new CalendarEventFeed();
for (int i = 0; i < allCalEntries.size(); i++) {
    CalendarEventEntry entry = allCalEntries.get(i);
    BatchUtils.setBatchId(entry, Integer.toString(i));
    BatchUtils.setBatchOperationType(entry, BatchOperationType.DELETE);
    batchRequest.getEntries().add(entry);
}
// Get the batch link URL and send the batch request
Link batchLink = resultFeed.getLink(Link.Rel.FEED_BATCH, Link.Type.ATOM);
CalendarEventFeed batchResponse = service.batch(new URL(batchLink.getHref()), batchRequest);
// Ensure that all the operations were successful
boolean isSuccess = true;
StringBuffer batchFailureMsg = new StringBuffer("These entries in the batch delete failed:");
for (CalendarEventEntry entry : batchResponse.getEntries()) {
    String batchId = BatchUtils.getBatchId(entry);
    if (!BatchUtils.isSuccess(entry)) {
        isSuccess = false;
        BatchStatus status = BatchUtils.getBatchStatus(entry);
        batchFailureMsg.append("\nID: " + batchId + " Reason: " + status.getReason());
    }
}
if (!isSuccess) {
    throw new Exception(batchFailureMsg.toString());
}
There is a small quote on the API page
http://code.google.com/apis/calendar/data/1.0/reference.html#Parameters
Note: The max-results query parameter for Calendar is set to 25 by default, so that you won't receive an entire calendar feed by accident. If you want to receive the entire feed, you can specify a very large number for max-results.
So to get all events from a Google calendar feed, we do this:
google.calendarurl.com/.../basic?max-results=999999
In the API you can also query with setMaxResults(999999).
I got here while searching for a Python solution;
Should anyone be stuck in the same way, the important line is the fourth:
query = gdata.calendar.service.CalendarEventQuery(cal, visibility, projection)
query.start_min = start_date
query.start_max = end_date
query.max_results = 1000
Unfortunately, Google limits the maximum number of entries you can retrieve per query. This is to keep queries within their guidelines (HTTP requests are not allowed to take more than 30 seconds, for example). They've built their whole architecture around this, so you might as well build the logic as you have.

Paging a collection with LINQ

How do you page through a collection in LINQ given that you have a startIndex and a count?
It is very simple with the Skip and Take extension methods.
var query = from i in ideas
            select i;
var pagedCollection = query.Skip(startIndex).Take(count);
A few months back I wrote a blog post about Fluent Interfaces and LINQ which used an Extension Method on IQueryable<T> and another class to provide the following natural way of paginating a LINQ collection.
var query = from i in ideas
            select i;
var pagedCollection = query.InPagesOf(10);
var pageOfIdeas = pagedCollection.Page(2);
You can get the code from the MSDN Code Gallery Page: Pipelines, Filters, Fluent API and LINQ to SQL.
I solved this a bit differently from the others, as I had to make my own paginator with a repeater. So I first made a collection of page numbers for the collection of items that I have:
// assumes that the item collection is "myItems"
int pageCount = (myItems.Count + PageSize - 1) / PageSize;
IEnumerable<int> pageRange = Enumerable.Range(1, pageCount);
// pageRange contains [1, 2, ... , pageCount]
Using this I could easily partition the item collection into a collection of "pages". A page in this case is just a collection of items (IEnumerable<Item>). This is how you can do it using Skip and Take together with selecting the index from the pageRange created above:
IEnumerable<IEnumerable<Item>> pages = pageRange
    .Select((page, index) =>
        myItems
            .Skip(index * PageSize)
            .Take(PageSize));
Of course you have to handle each page as an additional collection but e.g. if you're nesting repeaters then this is actually easy to handle.
The one-liner TLDR version would be this:
var pages = Enumerable
    .Range(0, pageCount)
    .Select((index) => myItems.Skip(index * PageSize).Take(PageSize));
Which can be used like this:
foreach (IEnumerable<Item> page in pages)
{
    // handle page
    foreach (Item item in page)
    {
        // handle item in page
    }
}
This question is somewhat old, but I wanted to post my paging algorithm that shows the whole procedure (including user interaction).
const int pageSize = 10;
const int count = 100;
const int startIndex = 20;
int took = 0;
bool getNextPage = true; // must be initialized so the while condition compiles
var page = ideas.Skip(startIndex);
do
{
    Console.WriteLine("Page {0}:", (took / pageSize) + 1);
    foreach (var idea in page.Take(pageSize))
    {
        Console.WriteLine(idea);
    }
    took += pageSize;
    if (took < count)
    {
        Console.WriteLine("Next page (y/n)?");
        char answer = Console.ReadLine().FirstOrDefault();
        getNextPage = default(char) != answer && 'y' == char.ToLowerInvariant(answer);
        if (getNextPage)
        {
            page = page.Skip(pageSize);
        }
    }
}
while (getNextPage && took < count);
However, if you are after performance (and in production code we're all after performance), you shouldn't use LINQ's paging as shown above, but rather the underlying IEnumerator to implement paging yourself. As a matter of fact, it is as simple as the LINQ algorithm shown above, but more performant:
const int pageSize = 10;
const int count = 100;
const int startIndex = 20;
int took = 0;
bool getNextPage = true;
using (var page = ideas.Skip(startIndex).GetEnumerator())
{
    do
    {
        Console.WriteLine("Page {0}:", (took / pageSize) + 1);
        int currentPageItemNo = 0;
        while (currentPageItemNo++ < pageSize && page.MoveNext())
        {
            var idea = page.Current;
            Console.WriteLine(idea);
        }
        took += pageSize;
        if (took < count)
        {
            Console.WriteLine("Next page (y/n)?");
            char answer = Console.ReadLine().FirstOrDefault();
            getNextPage = default(char) != answer && 'y' == char.ToLowerInvariant(answer);
        }
    }
    while (getNextPage && took < count);
}
Explanation: The downside of using Skip() multiple times in a "cascading manner" is that it does not really store a "pointer" to where the iteration last stopped. Instead, the original sequence is front-loaded with skip calls, which leads to "consuming" the already consumed pages over and over again. You can prove that yourself by creating the sequence ideas so that it yields side effects: even if you have already skipped 10-20 and 20-30 and want to process 40+, you will see all the side effects of 10-30 being executed again before you start iterating 40+.
The variant using the IEnumerator directly will instead remember the position of the end of the last logical page, so no explicit skipping is needed and side effects are not repeated.
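A quick way to see the repeated side effects described above (my own toy example, not part of the answer):
// Assumes: using System; using System.Collections.Generic; using System.Linq;
static IEnumerable<int> Ideas()
{
    for (int i = 0; i < 100; i++)
    {
        Console.WriteLine("producing idea " + i); // visible side effect per yielded item
        yield return i;
    }
}

var ideas = Ideas();
var firstPage  = ideas.Skip(0).Take(10).ToList();  // prints "producing idea 0" .. "producing idea 9"
var secondPage = ideas.Skip(10).Take(10).ToList(); // prints "producing idea 0" .. "producing idea 19":
                                                   // items 0-9 are produced again before 10-19 arrive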
