How to use the variable list in the Net-SNMP library

I'm doing some stuff with the Net-SNMP library. Basically, what I do is based on the sample application Simple_Application. What is not clear to me, though, is this part of the code:
for (vars = response->variables; vars; vars = vars->next_variable) {
    /* each iteration visits one variable binding from the response;
       the sample application prints it: */
    print_variable(vars->name, vars->name_length, vars);
}
I did a lot of testing, read this post as well, and it seems to me that you mostly get a scalar value from an SNMP request. So the question is: when do you get more than one variable in a response?

Each request may contain a number of (scalar) variable names, and the response message will have corresponding variable bindings for each requested variable. So looping through them does make sense in that use case.
SNMP also allows the "get-next" request, which has similar semantics, and even the "get-bulk" request, which may return a large number of variables. You can find examples of each request type in RFC 1905 (see sections 4.2.1 and 4.2.2 in particular).
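For illustration, here is what a multi-variable get looks like with the Node net-snmp package (a different binding from the C library, but the response has the same varbind-list shape; host, community, and OIDs below are placeholders):
// Sketch using the Node "net-snmp" package; host/community/OIDs are examples.
const snmp = require("net-snmp");
const session = snmp.createSession("127.0.0.1", "public");
const oids = ["1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.5.0"]; // sysDescr.0, sysName.0
session.get(oids, (error, varbinds) => {
  if (error) {
    console.error(error);
  } else {
    // One varbind comes back per requested OID, hence the loop.
    for (const vb of varbinds) {
      if (snmp.isVarbindError(vb)) {
        console.error(snmp.varbindError(vb));
      } else {
        console.log(vb.oid + " = " + vb.value);
      }
    }
  }
  session.close();
});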

Related

Standard practice for validating a large number of parameters

I am creating an Express.js API and am having trouble programming parameter validation eloquently. If I'm expecting 10-15 parameter variables in incoming POST requests, what is the standard format for validating all of those variables?
Does anyone have any examples they could point at?
In my specific case I receive a JavaScript object, and I need to validate that the variables exist and hold accepted values. I obviously know to test that each one is set, is a string, is an accepted value, and so on. My question is really about how to structure the validation algorithm.
Do I just have a validate function that goes through a giant if statement or what?
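No canonical answer is given here, but one common alternative to the giant if statement is a declarative rule table you loop over. A minimal sketch (field names and rules below are hypothetical):
// Hypothetical rule table: one entry per expected parameter.
const rules = {
  username: (v) => typeof v === "string" && v.length > 0,
  role: (v) => ["admin", "user"].includes(v),
  age: (v) => Number.isInteger(v) && v >= 0,
};

function validate(body) {
  const errors = [];
  for (const [field, isValid] of Object.entries(rules)) {
    if (!(field in body)) {
      errors.push(field + " is required");
    } else if (!isValid(body[field])) {
      errors.push(field + " is invalid");
    }
  }
  return errors;
}

// Express usage (route is illustrative):
// app.post("/users", (req, res) => {
//   const errors = validate(req.body);
//   if (errors.length) return res.status(400).json({ errors });
//   // ... handle the valid request
// });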

How can I access the headers of an incoming request in Tritium?

I would like to be able to add some logic to my Tritium project based on the incoming request headers. Is it possible to access the header information and then perform match() with() logic?
My plan is to take an existing URL (that can be accessed via a normal GET request) and give it a second mode of functionality so that it can be turned into an AJAX API. When the JavaScript makes the API request, I could set a custom header flag so that the platform knows to interpret the request differently.
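For reference, the client side of that plan would be something like the following sketch (the header name X-Requested-Mode is made up; any custom flag would do):
// Sketch: flag the AJAX request with a custom header (name is hypothetical).
var xhr = new XMLHttpRequest();
xhr.open("GET", "/your/existing/url");
xhr.setRequestHeader("X-Requested-Mode", "api");
xhr.onload = function () {
  // handle the API-style response here
};
xhr.send();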
You should be able to access headers in the incoming HTTP request using the global variable syntax. For example, to access the site's hostname:
$host
# => yourwebsite.com
I believe that most of the standard headers are accessible as global variables in Tritium. However, I'm not sure if all headers are accessible as global vars.
Inside your project folder on your development machine, there should be a tmp folder that contains the HTTP request/response bundles. Each bundle should be time-stamped with the request's date and time. I think if you peek inside one of these folders, you should see a bunch of files:
incoming_request
incoming_response
outgoing_request
outgoing_response
And possibly a fifth file: I can't remember whether this is still the case in the current version of the platform, but there's a chance it contains the global variables that the Tritium server creates to store HTTP request header values. If it exists, you can peek inside and find out which variable names your HTTP headers use.
Hope that helps!
I'm late on this one, but I figured I would lend a hand to anyone else who needs help with it.
You need to create two files in your scripts directory: one called request_main.ts and one called response_main.ts.
You can then use functions such as parse_headers, which iterates through the request or response headers, depending on which file you put the code in.
parse_headers() { # iterate over all the incoming/outgoing headers
  log(name())  # log the name of the current header in the iteration
  log(value()) # log the value of the current header in the iteration
}
parse_headers(/Set-Cookie/) { # iterate over the Set-Cookie headers only
  log(this())  # log the current Set-Cookie header
}
This will log all of your header names and values. To make modifications, you can then use "setter" functions, which you can read about here:
http://developer.moovweb.com/docs/local/configuration/headers
Good luck.

What does an Ajax call response like 'for (;;); { json data }' mean? [duplicate]

Possible Duplicate:
Why do people put code like “throw 1; <dont be evil>” and “for(;;);” in front of json responses?
I found this kind of syntax being used on Facebook for Ajax calls. I'm confused about the for (;;); part at the beginning of the response. What is it used for?
This is the call and response:
GET http://0.131.channel.facebook.com/x/1476579705/51033089/false/p_1524926084=0
Response:
for (;;);{"t":"continue"}
I suspect the primary reason it's there is control. It forces you to retrieve the data via Ajax, not via JSON-P or similar (which uses script tags, and so would fail because that for loop is infinite), and thus ensures that the Same Origin Policy kicks in. This lets them control what documents can issue calls to the API — specifically, only documents that have the same origin as that API call, or ones that Facebook specifically grants access to via CORS (on browsers that support CORS). So you have to request the data via a mechanism where the browser will enforce the SOP, and you have to know about that preface and remove it before deserializing the data.
So yeah, it's about controlling (useful) access to that data.
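In practice, then, a client that's allowed to use the API has to strip that known preface before deserializing. A minimal sketch (the prefix string is taken from the response shown above):
// Sketch: remove the "for (;;);" preface, then parse the remainder as JSON.
var PREFIX = "for (;;);";

function parseGuardedJson(text) {
  if (text.indexOf(PREFIX) === 0) {
    text = text.slice(PREFIX.length);
  }
  return JSON.parse(text);
}

parseGuardedJson('for (;;);{"t":"continue"}'); // => { t: "continue" }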
Facebook has a ton of developers working internally on a lot of projects, and it is very common for someone to make a minor mistake, whether it be something as simple and serious as failing to escape data inserted into an HTML or SQL template, or something as intricate and subtle as using eval (sometimes inefficient and arguably insecure) or JSON.parse (a compliant but not universally implemented extension) instead of a "known good" JSON decoder. It is therefore important to figure out ways to easily enforce best practices on this developer population.
To face this challenge, Facebook has recently been going "all out" with internal projects designed to gracefully enforce these best practices, and to be honest the only explanation that truly makes sense for this specific case is just that: someone internally decided that all JSON parsing should go through a single implementation in their core library, and the best way to enforce that is for every single API response to get for(;;); automatically tacked on the front.
In so doing, a developer can't be "lazy": they will notice immediately if they use eval(), wonder what is up, and then realize their mistake and use the approved JSON API.
The other answers being provided seem to all fall into one of two categories:
misunderstanding JSONP, or
misunderstanding "JSON hijacking".
Those in the first category rely on the idea that an attacker can somehow make a request "using JSONP" to an API that doesn't support it. JSONP is a protocol that must be supported on both the server and the client: it requires the server to return something akin to myFunction({"t":"continue"}) such that the result is passed to a local function. You can't just "use JSONP" by accident.
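To make that concrete, here is roughly what JSONP requires of both sides (the endpoint URL below is hypothetical):
// The page defines a callback, then loads a script whose body invokes it.
// A JSONP-aware server must wrap its data in that call; this API does not.
function myFunction(data) {
  console.log(data.t); // "continue"
}

var script = document.createElement("script");
script.src = "https://api.example.com/data?callback=myFunction"; // hypothetical
document.head.appendChild(script);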
Those in the second category are citing a very real vulnerability that has been described allowing a cross-site request forgery via <script> tags to APIs that do not use JSONP (such as this one), allowing a form of "JSON hijacking". This is done by changing the Array/Object constructor, which allows one to access the information being returned from the server without a wrapping function.
However, that is simply not possible in this case: the reason it works at all is that a bare array (one possible result of many JSON APIs, such as the famous Gmail example) is a valid expression statement, which is not true of a bare object.
In fact, the syntax for objects defined by JSON (which includes quotation marks around the field names, as seen in this example) conflicts with the syntax for blocks, and therefore cannot be used at the top-level of a script.
js> {"t":"continue"}
typein:2: SyntaxError: invalid label:
typein:2: {"t":"continue"}
typein:2: ....^
For this example to be exploitable by way of Object() constructor remapping, it would require the API to have instead returned the object inside of a set of parentheses, making it valid JavaScript (but then not valid JSON).
js> ({"t":"continue"})
[object Object]
Now, it could be that this for(;;); prefix trick is only "accidentally" showing up in this example, and is in fact being returned by other internal Facebook APIs that are returning arrays; but in this case that should really be noted, as that would then be the "real" cause for why for(;;); is appearing in this specific snippet.
Well, the for(;;); is an infinite loop (you can use Chrome's JavaScript console to run that code in a tab if you want, and then watch the CPU usage in the task manager go through the roof until the browser kills the tab).
So I suspect that maybe it is being put there to frustrate anyone attempting to parse the response using eval or any other technique that executes the returned data.
To explain further, it used to be fairly commonplace to parse a bit of JSON-formatted data using JavaScript's eval() function, by doing something like:
var parsedJson = eval('(' + jsonString + ')');
However, this is considered unsafe: if for some reason your JSON-formatted data contains executable JavaScript code instead of (or in addition to) JSON-formatted data, then that code will be executed by eval(). This means that if you are talking to an untrusted server, or if someone compromises a trusted server, they can run arbitrary code on your page.
Because of this, using things like eval() to parse JSON-formatted data is generally frowned upon, and the for(;;); statement in the Facebook JSON will prevent people from parsing the data that way. Anyone that tries will get an infinite loop. So essentially, it's like Facebook is trying to enforce that people work with its API in a way that doesn't leave them vulnerable to future exploits that try to hijack the Facebook API to use as a vector.
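To make the danger concrete, here is a sketch with a hostile response body (made up for illustration):
// If the server is hostile or compromised, this "data" executes on your page.
var jsonString = "alert(document.cookie)"; // attacker-controlled response body
var parsedJson = eval("(" + jsonString + ")"); // runs the alert!
// JSON.parse(jsonString) would instead throw a SyntaxError, which is what you want.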
I'm a bit late and T.J. has basically solved the mystery, but I thought I'd share a great paper on this particular topic that has good examples and provides deeper insight into this mechanism.
These infinite loops are a countermeasure against "Javascript hijacking", a type of attack that gained public attention with an attack on Gmail that was published by Jeremiah Grossman.
The idea is as simple as it is beautiful: a lot of users tend to be permanently logged in to Gmail or Facebook. So you set up a site, and in your malicious site's JavaScript you override the object or array constructor:
function Object() {
    // Make an Ajax request to your malicious server, exposing the object data
}
then you include a <script> tag in that site such as
<script src="http://www.example.com/object.json"></script>
And finally you can read all about the JSON objects in your malicious server's logs.
As promised, the link to the paper.
This looks like a hack to prevent a CSRF attack. There are browser-specific ways to hook into object creation, so a malicious website could do that first, and then include the following:
<script src="http://0.131.channel.facebook.com/x/1476579705/51033089/false/p_1524926084=0" />
If there weren't an infinite loop before the JSON, an object would be created (since JSON can be eval()ed as JavaScript), and the hooks would detect it and sniff the object members.
Now if you visit that site from a browser while logged into Facebook, it can get at your data as if it were you, and then send it back to its own server via, e.g., an AJAX or JavaScript post.

GET vs POST in AJAX?

Why are there GET and POST requests in AJAX, as they do not affect the page URL anyway? What difference does it make to pass sensitive data over GET in AJAX, as the data does not end up in the page URL?
You should use the proper HTTP verb according to what you require from your web service.
When dealing with a Collection URI like: http://example.com/resources/
GET: List the members of the collection, complete with their member URIs for further navigation. For example, list all the cars for sale.
PUT: Meaning defined as "replace the entire collection with another collection".
POST: Create a new entry in the collection where the ID is assigned automatically by the collection. The ID created is usually included as part of the data returned by this operation.
DELETE: Meaning defined as "delete the entire collection".
When dealing with a Member URI like: http://example.com/resources/7HOU57Y
GET: Retrieve a representation of the addressed member of the collection expressed in an appropriate MIME type.
PUT: Update the addressed member of the collection or create it with the specified ID.
POST: Treats the addressed member as a collection in its own right and creates a new subordinate of it.
DELETE: Delete the addressed member of the collection.
Source: Wikipedia
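To tie that to AJAX, here is a sketch of the same collection URI hit with both verbs using the fetch API (URL and payload are illustrative):
// GET: list the collection (safe, cacheable, parameters in the query string).
fetch("/resources/?color=red")
  .then((res) => res.json())
  .then((cars) => console.log(cars));

// POST: create a new member (parameters travel in the request body).
fetch("/resources/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "Roadster" }),
}).then((res) => res.json());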
Well, as for GET, you still have the URL length limitation. Other than that, it is quite conceivable that the server treats POST and GET requests differently; thus the need to be able to specify which kind of request you're making.
Another difference between GET and POST is the way caching is handled in browsers. POST responses are never cached. GET responses may or may not be cached, based on the caching rules specified in your response headers.
Two primary reasons for having them:
GET requests have some pretty restrictive limitations on size; POST are typically capable of containing much more information.
The backend may be expecting GET or POST, depending on how it's designed. We need the flexibility of doing a GET if the backend expects one, or a POST if that's what it's expecting.
It's simply down to respecting the rules of the HTTP protocol.
GET calls must be idempotent. This means that if you make the same call multiple times you will get the same result; it is not intended to change the underlying data. You might use this for a search box, etc.
POST calls are NOT idempotent. They are allowed to change the underlying data, so they might be used in a create method. If you make the same call multiple times, you will create multiple entries.
You normally send parameters to the AJAX script, it returns data based on these parameters. It works just like a form that has method="get" or method="post". When using the GET method, the parameters are passed in the query string. When using POST method, the parameters are sent in the post body.
Generally, if your parameters have very few characters and do not contain sensitive information then you send them via GET method. Sensitive data (e.g. password) or long text (e.g. an 8000 character long bio of a person) are better sent via POST method.
I mainly use the GET method with Ajax and I haven't had any problems until now, except for the following:
Internet Explorer (unlike Firefox and Google Chrome) caches GET calls when the same GET values are used.
So polling with Ajax GET can keep showing the same results unless you change the URL for each Ajax GET, e.g. by appending an irrelevant random number.
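A minimal cache-busting sketch of that trick (the parameter name _ is arbitrary):
// Append a throwaway timestamp so IE sees a different URL on every poll.
function cacheBustedUrl(baseUrl) {
  var sep = baseUrl.indexOf("?") === -1 ? "?" : "&";
  return baseUrl + sep + "_=" + new Date().getTime();
}

cacheBustedUrl("/api/data?user=42"); // => "/api/data?user=42&_=1700000000000"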
Others have covered the main points (context/idempotency, and size), but I'll add another: keeping data out of the URL. Even with SSL (which does encrypt the query string in transit), GET args end up in the URL, where server logs and browser history can record them; if that matters for your input args, use POST.
When you use the GET method in Ajax, only the value of a field is sent, not the formatting of its content. For example, the content of a textarea is simply appended to the URL with the GET method (losing newline characters). That is not the case with the POST method.

Using the HTTP Range Header with a range specifier other than bytes?

The core question is about the use of the HTTP headers Range, If-Range, and Accept-Ranges with a user-defined range specifier.
Here is a manufactured example to help illustrate my question. Assume I have a Web 2.0 style application that displays some sort of human readable documents. These documents are editorially broken up into pages (similar to articles you see on news websites). For this example, assume:
There is a document titled "HTTP Range Question" that is broken up into three pages.
The shell page (/document/shell/http-range-question) knows the meta information about the document, including the number of pages.
The first readable page of the document is loaded during the page onload event via an ajax GET and inserted onto the page.
A UI control that looks like [ 1 2 3 All ] is at the bottom of the page; clicking on a number will display that readable page (also loaded via ajax), and clicking "All" will display the entire document. Assume these URLs for the 1, 2, 3 and All use cases:
/document/content/http-range-question?page=1
/document/content/http-range-question?page=2
/document/content/http-range-question?page=3
/document/content/http-range-question
Now to the question. Can I use the HTTP Range header instead of part of the URL (e.g. a querystring parameter)? Maybe something like this on the GET /document/content/http-range-question request:
Range: page=1
It looks like the spec only defines byte ranges as allowable, so even if I made my ajax calls work with my browser and server code, anything in the middle could break the contract (e.g. a caching proxy server).
Range: bytes=0-499
Any opinions or real world examples of custom range specifiers?
Update: I did find a similar question about the Range header (Paging in a Rest Collection) where they mention that Dojo's JsonRestStore uses a custom Range header value.
Range: items=0-24
Absolutely - you are free to specify any range units you like.
From RFC 2616:
3.12 Range Units
HTTP/1.1 allows a client to request that only part (a range of) the response entity be included within the response. HTTP/1.1 uses range units in the Range (section 14.35) and Content-Range (section 14.16) header fields. An entity can be broken down into subranges according to various structural units.
range-unit       = bytes-unit | other-range-unit
bytes-unit       = "bytes"
other-range-unit = token
The only range unit defined by HTTP/1.1 is "bytes". HTTP/1.1 implementations MAY ignore ranges specified using other units.
The key piece is the last paragraph. Really what it's saying is that when they wrote the spec for HTTP/1.1, they only outlined the "bytes" token. But, as you can see from the 'other-range-unit' bit, you are free to come up with your own token specifiers.
Coming up with your own Range specifiers does mean that you have to have control over the client and server code that uses that specifier. So, if you own the backend piece that exposes the "/document/content/http-range-question" URI, you are good to go; presumably you're using a modern web framework that lets you inspect the request headers coming in. You could then look at the Range values to perform the backing query correctly.
Furthermore, if you control the AJAX code that makes requests to the backend, you should be able to set the Range header yourself.
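For example, the client side might look like this sketch, using the question's own URI and range unit (how the server interprets "page=1" is up to your backend):
// Sketch: request "page 1" with a custom Range unit via XMLHttpRequest.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/document/content/http-range-question");
xhr.setRequestHeader("Range", "page=1");
xhr.onload = function () {
  // A cooperating server would answer 206 Partial Content with just page 1.
  console.log(xhr.status, xhr.responseText);
};
xhr.send();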
However, there is a potential downside which you anticipate in your question: the potential to break caching. If you are using a custom Range unit, any caches between your client and the origin servers "MAY ignore ranges specified using [units other than 'bytes']". So for example, if you had a Squid/Varnish cache between the front and backend, there's no guarantee that the results you're hoping for will be served from the cache!
You might also consider an alternative implementation where, rather than using a query string, you make the page a "parameter" of the URI; e.g.: /document/content/http-range-question/page/1. This would likely be a little more work for you server-side, but it's HTTP/1.1 compliant and caches should treat it properly.
Hope this helps.
bytes is the only unit defined by the HTTP/1.1 specification.
HTTP Range is typically used for recovering interrupted downloads without starting from the beginning.
What you're trying to do would be better handled by OAI-ORE, which allows you to define relationships between multiple documents. (alternative formats, components of the whole, etc)
Unfortunately, it's a relatively new metadata format, and I don't know of any web browsers that ship with native support.
It sounds like you want to change the HTTP spec just to remove a querystring parameter. In order to do this you'd have to modify code on both the client to send the modified header and the server to read from the "Range" header instead of the querystring.
The end result is that this will probably work, but you're breaking all of the standards and existing tools to do so.
