I'm trying to perform the request below, and the result should be around 900 variables, not 100.
It doesn't matter how many OIDs I send (1 or 10), I never get more than 100 variables back.
What am I doing wrong?
var readCommunity = new OctetString("XXXXX");
var oidsList = new List<string>
{
    "1.3.6.1.2.1.2.2.1.3",
    "1.3.6.1.2.1.2.2.1.5",
    "1.3.6.1.2.1.2.2.1.6",
    "1.3.6.1.2.1.2.2.1.7",
    "1.3.6.1.2.1.2.2.1.8",
    "1.3.6.1.2.1.2.2.1.2",
    "1.3.6.1.2.1.2.2.1.10",
    "1.3.6.1.2.1.2.2.1.16",
    "1.3.6.1.2.1.2.2.1.14",
    "1.3.6.1.2.1.31.1.1.1.6"
};
var oids = oidsList.Select(oid => new Variable(new ObjectIdentifier(oid))).ToArray();
ISnmpMessage request = new GetBulkRequestMessage(
    0,
    VersionCode.V2,
    readCommunity,
    0,
    1000,
    oids);
var response = request.GetResponse(60000, new IPEndPoint(IPAddress.Parse("1.1.1.1"), 161));
You can send a request asking for as many items as you wish, but it is the agent that decides how many to return. That's how the standard defines it:
The receiving SNMP entity produces a Response-PDU with up to the
total number of requested variable bindings communicated by the
request.
While the maximum number of variable bindings in the Response-PDU is
bounded by N + (M * R), the response may be generated with a lesser
number of variable bindings (possibly zero) for either of three
reasons.
Reference: RFC 3416, Section 4.2.3 (GetBulkRequest-PDU processing).
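To actually collect the ~900 variables, you have to keep issuing requests yourself, each one continuing from the last OID the agent returned (SharpSnmpLib also ships a bulk-walk helper in its Messenger class that wraps this pattern, if I remember correctly). Below is a rough single-column sketch built from the same types used in the question; the Pdu() accessor and the stopping check are assumptions that may need adjusting for your library version:
var endpoint = new IPEndPoint(IPAddress.Parse("1.1.1.1"), 161);
var root = "1.3.6.1.2.1.2.2.1.2";          // walk one column (ifDescr) as an example
var current = new ObjectIdentifier(root);
var results = new List<Variable>();

while (true)
{
    ISnmpMessage request = new GetBulkRequestMessage(
        0, VersionCode.V2, readCommunity, 0, 100,
        new List<Variable> { new Variable(current) });
    var response = request.GetResponse(60000, endpoint);
    var variables = response.Pdu().Variables;   // assumes the Pdu() extension method is available
    if (variables.Count == 0)
    {
        break;
    }

    var done = false;
    foreach (var v in variables)
    {
        if (!v.Id.ToString().StartsWith(root + "."))
        {
            done = true;            // walked past the end of the requested column
            break;
        }
        results.Add(v);
        current = v.Id;             // next request continues from this OID
    }
    if (done)
    {
        break;
    }
}
// Repeat per column, or merge the per-column results afterwards.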
I have a timed trigger that runs every 15 minutes. A simplified partial version of the script is shown below. The script compiles data from about 50 other spreadsheets and records a row for each spreadsheet, then writes that summary data to the active spreadsheet.
I noticed that in the logs, there is an alternating pattern in the execution times for this script: half the executions take 200-400 seconds, and the other half typically take 700-900 seconds. It's a pretty significant difference, and the pattern persists over the past several days of logs.
There's nothing in the script itself that changes from one execution to the next, so I'm curious if anyone can suggest a reason this would happen (even better if it's a documented reason). For example, is there some sort of caching of the spreadsheet reads so that the next execution gets those values faster?
// The triggered function.
function updateRankings()
{
  var rankingSheet = SS.getSheetByName(RANKING_SHEET_NAME) // SS is the active spreadsheet
  // Read the id's of the target spreadsheets, which are stored on an external spreadsheet
  var gyms = getRowsData(SpreadsheetApp.openById(ADMIN_PANEL_ID).getSheetByName(ADMIN_PANEL_SHEET_NAME))
  // Iterate over gyms
  gyms.forEach(getGymStats)
  // Write the compiled data back to the active sheet
  setRowsData(rankingSheet, gyms)
}

function getGymStats(gym)
{
  var gymSpreadsheet = SpreadsheetApp.openById(gym.spreadsheetId)
  // Force spreadsheet formulas to calculate before reading values
  SpreadsheetApp.flush()
  var metricsSheet = gymSpreadsheet.getSheetByName('Detailed Metrics')
  var statsColumn = metricsSheet.getRange('E:E').getValues()
  var roasColumn = metricsSheet.getRange('J:J').getValues()
  // Get stats
  var gymStats = {
    facebookAdSpend: getFacebookAdSpend(gymSpreadsheet),
    scheduling: statsColumn[8][0],
    showup: statsColumn[9][0],
    closing: statsColumn[10][0],
    costPerLead: statsColumn[25][0],
    costPerAppointment: statsColumn[26][0],
    costPerShow: statsColumn[27][0],
    costPerAcquisition: statsColumn[28][0],
    leadCount: statsColumn[13][0],
    frontEndRoas: (roasColumn[21][0] / statsColumn[5][0]) || 0,
    totalRoas: (roasColumn[35][0] / statsColumn[5][0]) || 0,
    totalProjectedRoas: (roasColumn[36][0] / statsColumn[5][0]) || 0,
    conversionRate: (gym.currency ?
      '=IFS(ISBLANK(INDIRECT("R[0]C[-4]", FALSE)),,ISBLANK(INDIRECT("R[0]C[-2]", FALSE)), 1,TRUE, IFERROR(GOOGLEFINANCE("Currency:"&INDIRECT("R[0]C[-2]", FALSE)&"USD")))' :
      1)
  }
  Object.assign(gym, gymStats)
}

function getFacebookAdSpend(spreadsheet)
{
  var range = spreadsheet.getRangeByName('FacebookAdSpend')
  if (!range) return ''
  return range.getValue()
}
I'm new to Java, so if this has already been answered somewhere else, then I either don't know enough to search for the right things or I just couldn't understand the answers.
So the question being:
I have a bunch of objects in a list:
try (Stream<String> logs = Files.lines(Paths.get(args))) {
    return logs.map(LogLine::parseLine).collect(Collectors.toList());
}
And this is how the properties are added:
LogLine line = new LogLine();
line.setUri(matcher.group("uri"));
line.setrequestDuration(matcher.group("requestDuration"));
....
How do I process the logs so that I end up with a list where objects with the same "uri" appear only once, with the average requestDuration?
Example:
object1.uri = 'uri1', object1.requestDuration = 20;
object2.uri = 'uri2', object2.requestDuration = 30;
object3.uri = 'uri1', object3.requestDuration = 50;
Result:
object1.uri = 'uri1', 35;
object2.uri = 'uri2', 30;
Thanks in advance!
Take a look at Collectors.groupingBy and Collectors.averagingDouble. In your case, you could use them as follows:
Map<String, Double> result = logLines.stream()
        .collect(Collectors.groupingBy(
                LogLine::getUri,
                TreeMap::new,
                Collectors.averagingDouble(LogLine::getRequestDuration)));
The Collectors.groupingBy method does what you want. It is overloaded, so that you can specify the function that returns the key to group elements by, the factory that creates the returned map (I'm using TreeMap here, because you want the entries ordered by key, in this case the URI), and a downstream collector, which collects the elements that match the key returned by the first parameter.
If you want an Integer instead of a Double value for the averages, consider using Collectors.averagingInt.
This assumes LogLine has getUri() and getRequestDuration() methods.
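For completeness, here is a minimal, self-contained sketch of that approach; the LogLine fields and the AverageByUri class are just placeholders for illustration, since I don't know your actual class:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

class LogLine {
    private String uri;
    private int requestDuration;

    public String getUri() { return uri; }
    public int getRequestDuration() { return requestDuration; }
    public void setUri(String uri) { this.uri = uri; }
    public void setRequestDuration(int requestDuration) { this.requestDuration = requestDuration; }
}

public class AverageByUri {
    public static void main(String[] args) {
        List<LogLine> logLines = new ArrayList<>();
        // ... fill logLines, e.g. from Files.lines(...).map(LogLine::parseLine) ...

        // Group by URI and average the durations per group.
        Map<String, Double> averages = logLines.stream()
                .collect(Collectors.groupingBy(
                        LogLine::getUri,
                        TreeMap::new,
                        Collectors.averagingDouble(LogLine::getRequestDuration)));

        // For the example data in the question this prints: uri1 -> 35.0, uri2 -> 30.0
        averages.forEach((uri, avg) -> System.out.println(uri + " -> " + avg));
    }
}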
I've encountered a weird issue when creating groups based on a variable index, instead of a literal number, when the crossfilter rows are arrays.
I currently have an output array of a date followed by 4 values, which I then map into a composite graph. The problem is that the number of values can fluctuate depending on the input given to the page: based on what it receives, I can have 3 values, or 10, and there's no way to know in advance. They're placed into an array which is then given to a crossfilter. During testing, I was accessing the values using
dimension.group().reduceSum(function(d) { return d[0]; });
Where 0 was changed to whatever I needed. But I've finished testing, for the most part, and began to adapt it into a dynamic system where it can change, but there's always at least the first two. To do this I created an integer that keeps track of what index I'm at, and then increases it after the group has been created. The following code is being used:
var range = crossfilter(results);
var dLen = 0;
var curIndex = 0;
var dateDimension = range.dimension(function(d) { dLen = d.length; return d[curIndex]; });
curIndex++;
var aGroup = dateDimension.group().reduceSum(function(d) { return d[curIndex]; });
curIndex++;
var bGroup = dateDimension.group().reduceSum(function(d) { return d[curIndex]; });
curIndex++;

var otherGroups = [];
for (var h = 0; h < dLen - 3; h++) {
    otherGroups[h] = dateDimension.group().reduceSum(function(d) { return d[curIndex]; });
    curIndex++;
}

var charts = [];
for (var x = 0; x < dLen - 3; x++) {
    charts[x] = dc.barChart(dataGraph)
        .group(otherGroups[x], "Extra Group " + (x + 1))
        .hidableStacks(true);
}
charts[charts.length] = dc.lineChart(dataGraph)
    .group(aGroup, "Group A")
    .hidableStacks(true);
charts[charts.length] = dc.lineChart(dataGraph)
    .group(bGroup, "Group B")
    .hidableStacks(true);
The issue is this:
The graph gets built empty. I checked the curIndex variable multiple times and it was always correct. I finally decided to instead check the actual group's resulting data using the .all() method.
The weird thing is that AFTER I used .all(), the data works. Without a .all() call, the graph cannot determine the data and outputs absolutely nothing; however, if I call .all() immediately after the group has been created, it populates correctly.
Each group needs a .all() call, or only the ones that have one will work. For example, when I was first debugging, I used .all() only on aGroup, and only aGroup populated into the graph. When I added it to bGroup, both aGroup and bGroup populated. So in the current build, every group has .all() called directly after it is created.
Technically there's no issue, but I'm really confused about why this is required. I have absolutely no idea what the cause is, and I was wondering if anyone has any insight into it. When I was using literals, there was no issue; it only happens when I'm using a variable to create the groups. I tried to get the output later, and when I did I received NaN for all the values. I'm not really sure why .all() changes the values into what they should be, especially when it only works if I call it immediately after the group has been created.
Below is a screenshot of the graph. The top is when everything has a .all() call after being created, while the bottom is when the Extra Groups (the ones defined in the for loop) no longer have the .all() call. The data is just not there at all; I'm not really sure why. Any thoughts would be great.
http://i.stack.imgur.com/0j1ey.jpg
It looks like you may have run into the classic "generating lambdas from loops" JavaScript problem.
You are creating a whole bunch of functions that reference curIndex but unless you call those functions immediately, they will refer to the same instance of curIndex in the global environment. So if you call them after initialization, they will probably all try to use a value which is past the end.
Instead, you might create a function which generates your lambdas, like so:
function accessor(curIndex) {
return function(d) { return d[curIndex]; };
}
And then each time call .reduceSum(accessor(curIndex))
This will cause the value of curIndex to get copied each time you call the accessor function (or you can think of each generated function as having its own environment with its own curIndex).
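Using the same variables as in your question, the corrected group creation might then look roughly like this:
var aGroup = dateDimension.group().reduceSum(accessor(1));
var bGroup = dateDimension.group().reduceSum(accessor(2));

var otherGroups = [];
for (var h = 0; h < dLen - 3; h++) {
    // h + 3 is the column index for this extra group; accessor() copies it,
    // so the group keeps reading the right column later on.
    otherGroups[h] = dateDimension.group().reduceSum(accessor(h + 3));
}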
We are interested in combined statistics for a number of different pages from the Google Analytics Core Reporting API. The only way I found to query statistics for multiple pages at the same time is by creating a filter like so:
ga:pagePath==page?id=a,ga:pagePath==page?id=b,ga:pagePath==page?id=c
And this gets escaped inside the filters parameter of the GET query.
However, when the GET query grows beyond roughly 2,000 characters, I get the following response:
414. That’s an error.
The requested URL /analytics/v3/data/ga... is too large to process. That’s all we know.
Note that, just like in the example call, the only part that differs per page is a GET parameter in the pagePath, but we have to OR in a new filter specifying both the dimension (pagePath) and the part of the path that is always identical.
Is there any way to specify a large number of different pages to query without hitting this limit in the GET query (I can't find any documentation for doing POST requests)? Or are there alternatives to creating batches of a max of X different pages per query and adding them up on my end?
Instead of using ga:pagePath as part of a filter, you should use it as a dimension. You can get up to 10,000 rows per query this way and paginate to get all results, then parse the results client side to get what you need. Additionally, use a filter to scope the results down if possible, based on your site structure or page names.
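As a rough sketch of that approach, borrowing the .NET AnalyticsService pattern from the answer below (the profile ID, date range, metric, and the =@ filter are placeholders you would adapt):
var request = gas.Data.Ga.Get("ga:" + profileId, "2014-01-01", "2014-01-31", "ga:pageviews");
request.Dimensions = "ga:pagePath";
request.Filters = "ga:pagePath=@page";   // optional: narrow to the common part of the path
request.MaxResults = 10000;

int startIndex = 1;
while (true)
{
    request.StartIndex = startIndex;
    var data = request.Fetch();
    if (data == null || data.Rows == null || data.Rows.Count == 0) break;

    // ... match each row's pagePath against your page ids client side ...

    startIndex += data.Rows.Count;
}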
I am sharing sample code with which you can fetch more than 10,000 records of data by paging through the results:
private void GetDataofPpcInfo(DateTime dtStartDate, DateTime dtEndDate, AnalyticsService gas, List<PpcReportData> lstPpcReportData, string strProfileID)
{
    int intStartIndex = 1;
    int intIndexCnt = 0;
    int intMaxRecords = 10000;
    var metrics = "ga:impressions,ga:adClicks,ga:adCost,ga:goalCompletionsAll,ga:CPC,ga:visits";
    var r = gas.Data.Ga.Get("ga:" + strProfileID, dtStartDate.ToString("yyyy-MM-dd"), dtEndDate.ToString("yyyy-MM-dd"),
        metrics);
    r.Dimensions = "ga:campaign,ga:keyword,ga:adGroup,ga:source,ga:isMobile,ga:date";
    r.MaxResults = 10000;
    r.Filters = "ga:medium==cpc;ga:campaign!=(not set)";
    while (true)
    {
        r.StartIndex = intStartIndex;
        var dimensionOneData = r.Fetch();
        dimensionOneData.ItemsPerPage = intMaxRecords;
        if (dimensionOneData != null && dimensionOneData.Rows != null)
        {
            var enUS = new CultureInfo("en-US");
            intIndexCnt++;
            foreach (var lstFirst in dimensionOneData.Rows)
            {
                var objPPCReportData = new PpcReportData();
                objPPCReportData.Campaign = lstFirst[dimensionOneData.ColumnHeaders.IndexOf(dimensionOneData.ColumnHeaders.FirstOrDefault(h => h.Name == "ga:campaign"))];
                objPPCReportData.Keywords = lstFirst[dimensionOneData.ColumnHeaders.IndexOf(dimensionOneData.ColumnHeaders.FirstOrDefault(h => h.Name == "ga:keyword"))];
                lstPpcReportData.Add(objPPCReportData);
            }
            intStartIndex = intIndexCnt * intMaxRecords + 1;
        }
        else break;
    }
}
The only problematic thing is that your query length shouldn't exceed around 2,000 characters.
I'm trying to delete all calendar entries from today forward. I run a query, then call getEntries() on the query result. getEntries() always returns 25 entries (or fewer, if there are fewer than 25 entries on the calendar). Why aren't all the entries returned? I'm expecting about 80 entries.
As a test, I tried running the query, deleting the 25 entries returned, running the query again, deleting again, etc. This works, but there must be a better way.
Below is the Java code that only runs the query once.
CalendarQuery myQuery = new CalendarQuery(feedUrl);
DateFormat dfGoogle = new SimpleDateFormat("yyyy-MM-dd'T00:00:00'");
Date dt = Calendar.getInstance().getTime();
myQuery.setMinimumStartTime(DateTime.parseDateTime(dfGoogle.format(dt)));
// Make the end time far into the future so we delete everything
myQuery.setMaximumStartTime(DateTime.parseDateTime("2099-12-31T23:59:59"));
// Execute the query and get the response
CalendarEventFeed resultFeed = service.query(myQuery, CalendarEventFeed.class);
// !!! This returns 25 (or less if there are fewer than 25 entries on the calendar) !!!
int test = resultFeed.getEntries().size();
// Delete all the entries returned by the query
for (int j = 0; j < resultFeed.getEntries().size(); j++) {
    CalendarEventEntry entry = resultFeed.getEntries().get(j);
    entry.delete();
}
PS: I've looked at the Data API Developer's Guide and the Google Data API Javadoc. These sites are okay, but not great. Does anyone know of additional Google API documentation?
You can increase the number of results with myQuery.setMaxResults(). There is still a server-imposed cap on that maximum, though, so you can make multiple queries ('paged' results) by varying myQuery.setStartIndex().
http://code.google.com/apis/gdata/javadoc/com/google/gdata/client/Query.html#setMaxResults(int)
http://code.google.com/apis/gdata/javadoc/com/google/gdata/client/Query.html#setStartIndex(int)
Based on the answers from Jim Blackler and Chris Kaminski, I enhanced my code to read the query results in pages. I also do the delete as a batch, which should be faster than doing individual deletions.
I'm providing the Java code here in case it is useful to anyone.
CalendarQuery myQuery = new CalendarQuery(feedUrl);
DateFormat dfGoogle = new SimpleDateFormat("yyyy-MM-dd'T00:00:00'");
Date dt = Calendar.getInstance().getTime();
myQuery.setMinimumStartTime(DateTime.parseDateTime(dfGoogle.format(dt)));
// Make the end time far into the future so we delete everything
myQuery.setMaximumStartTime(DateTime.parseDateTime("2099-12-31T23:59:59"));
// Set the maximum number of results to return for the query.
// Note: A GData server may choose to provide fewer results, but will never provide
// more than the requested maximum.
myQuery.setMaxResults(5000);
int startIndex = 1;
int entriesReturned;
List<CalendarEventEntry> allCalEntries = new ArrayList<CalendarEventEntry>();
CalendarEventFeed resultFeed;
// Run our query as many times as necessary to get all the
// Google calendar entries we want
while (true) {
    myQuery.setStartIndex(startIndex);
    // Execute the query and get the response
    resultFeed = service.query(myQuery, CalendarEventFeed.class);
    entriesReturned = resultFeed.getEntries().size();
    if (entriesReturned == 0) {
        // We've hit the end of the list
        break;
    }
    // Add the returned entries to our local list
    allCalEntries.addAll(resultFeed.getEntries());
    startIndex = startIndex + entriesReturned;
}

// Delete all the entries as a batch delete
CalendarEventFeed batchRequest = new CalendarEventFeed();
for (int i = 0; i < allCalEntries.size(); i++) {
    CalendarEventEntry entry = allCalEntries.get(i);
    BatchUtils.setBatchId(entry, Integer.toString(i));
    BatchUtils.setBatchOperationType(entry, BatchOperationType.DELETE);
    batchRequest.getEntries().add(entry);
}

// Get the batch link URL and send the batch request
Link batchLink = resultFeed.getLink(Link.Rel.FEED_BATCH, Link.Type.ATOM);
CalendarEventFeed batchResponse = service.batch(new URL(batchLink.getHref()), batchRequest);

// Ensure that all the operations were successful
boolean isSuccess = true;
StringBuffer batchFailureMsg = new StringBuffer("These entries in the batch delete failed:");
for (CalendarEventEntry entry : batchResponse.getEntries()) {
    String batchId = BatchUtils.getBatchId(entry);
    if (!BatchUtils.isSuccess(entry)) {
        isSuccess = false;
        BatchStatus status = BatchUtils.getBatchStatus(entry);
        batchFailureMsg.append("\nID: " + batchId + " Reason: " + status.getReason());
    }
}
if (!isSuccess) {
    throw new Exception(batchFailureMsg.toString());
}
There is a small quote on the API page
http://code.google.com/apis/calendar/data/1.0/reference.html#Parameters
Note: The max-results query parameter for Calendar is set to 25 by default, so that you won't receive an entire calendar feed by accident. If you want to receive the entire feed, you can specify a very large number for max-results.
So to get all events from a Google Calendar feed, we do this:
google.calendarurl.com/.../basic?max-results=999999
In the API, you can also do this with setMaxResults(999999).
I got here while searching for a Python solution;
Should anyone be stuck in the same way, the important line is the fourth:
query = gdata.calendar.service.CalendarEventQuery(cal, visibility, projection)
query.start_min = start_date
query.start_max = end_date
query.max_results = 1000
Unfortunately, Google limits the maximum number of results you can retrieve per query. This is to keep requests within their governor guidelines (HTTP requests are not allowed to take more than 30 seconds, for example). They've built their whole architecture around this, so you might as well build the paging logic as you have.