Can a manager send SNMP requests to multiple agents concurrently? - snmp

I need to send GET requests to several SNMP agents from a client process.
I have implemented the client and agent based on the URLs below:
http://www.jitendrazaa.com/blog/java/snmp/create-snmp-client-in-java-using-snmp4j/
http://www.jitendrazaa.com/blog/java/snmp/creating-snmp-agent-server-in-java-using-snmp4j/
I would like to know whether the client/manager can send requests to the agents concurrently (e.g. using background threads within the process),
or whether it would be necessary to poll each agent individually.
From the samples:
The CommunityTarget has its address set to udp:127.0.0.1/161, which is then used in the SNMP GET request.
The agent has its address set to 0.0.0.0/2001, which is used when creating the TransportMappings.
I don't understand how the addressing works, or how I would configure the client to handle agents at other, non-local IP addresses.
Thank you

For question #1:
You are asking about synchronous vs. asynchronous API usage.
Google "snmp4j asynchronous" for examples.
This is orthogonal to single-thread vs. multi-thread.
I.e. you can have a single-threaded application which asynchronously sends requests and handles multiple agents' responses.
As an example, the MIMIC Recorder is a single-threaded, asynchronous app.
Multi-threading is only needed for complicated applications that handle complex management state machines.
For question #2:
It looks like the sample code only connects to the agent at 127.0.0.1 on the local system. To connect to a remote agent, you would have to use the IP address of the machine where the agent is running.
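To make both points concrete, here is a minimal sketch of asynchronous GETs sent to several agents from a single thread with SNMP4J (this is not from the linked tutorials; the agent addresses, community string and OID are placeholder assumptions):

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.event.ResponseListener;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class AsyncSnmpGet {
        public static void main(String[] args) throws Exception {
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();

            // Remote agents are addressed by their real IP and port, not 127.0.0.1.
            String[] agents = {"udp:192.168.1.10/161", "udp:192.168.1.11/161"}; // placeholders

            for (String addr : agents) {
                CommunityTarget target = new CommunityTarget();
                target.setCommunity(new OctetString("public"));
                target.setAddress(GenericAddress.parse(addr));
                target.setVersion(SnmpConstants.version2c);
                target.setRetries(2);
                target.setTimeout(1500);

                PDU pdu = new PDU();
                pdu.setType(PDU.GET);
                pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0"))); // sysDescr.0

                // Asynchronous send: the callback fires when the agent replies or the request times out.
                snmp.send(pdu, target, null, new ResponseListener() {
                    public void onResponse(ResponseEvent event) {
                        ((Snmp) event.getSource()).cancel(event.getRequest(), this);
                        System.out.println(event.getPeerAddress() + " -> " + event.getResponse());
                    }
                });
            }
            Thread.sleep(5000); // crude wait for the callbacks; use a CountDownLatch in real code
            snmp.close();
        }
    }

All the requests are in flight at once and the responses are handled as they arrive, so no per-agent polling threads are required.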

Related

Read_reply in tarantool-c is too slow

I am setting up a C server and use Tarantool as the database via tarantool-c. However, every time I call read_reply() the requests per second tank so much that it's like using MySQL. How can I fix this?
We had a discussion with James and he shared the code. The code implements an http server, and this is how it processes a request:
Accept a new incoming http request.
Send a request to tarantool (using binary protocol).
Wait for the reply from tarantool (synchronously, meanwhile unable to handle other incoming http requests).
Answer the http request.
The root of the problem here is that we are unable to utilize the full network bandwidth between the http server and tarantool. Such a server should use select() / poll() / epoll() (on Linux) / kqueue (on FreeBSD), or a library like libev, to determine whether it is able to write to a socket, read from it, or accept a request.
Let me briefly describe how it should operate to at least utilize the network as much as possible (when doing it from one thread), as a set of when-X-then-Y rules (a sketch of this loop follows the list):
When a new http request arrives, register the need to send a request to tarantool (let me call it a pending request).
When the socket to tarantool is ready for writing and there is at least one pending request, the server should form a request to tarantool, save its ID (see tnt_stream.reqid) and write the request to the socket.
When the socket to tarantool is ready for reading, the server should read a reply from tarantool, match its ID (see tnt_reply.sync) against the saved ones, write a response to the corresponding http client, and then close the socket to that client.
If http keepalive / pipelining needs to be supported, the server also needs to check the socket to an http client for readiness before performing read() / write(). It also needs to register pending http responses rather than writing them as soon as they appear.
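A bare skeleton of that readiness-driven loop, written with java.nio for brevity (the question is about C, where the same structure applies with select()/epoll() and tarantool-c); HTTP parsing, the tarantool binary protocol and request-ID matching are omitted, and the ports and addresses are placeholders:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.ArrayDeque;
    import java.util.Iterator;
    import java.util.Queue;

    public class ReadinessLoopSketch {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();

            ServerSocketChannel httpListener = ServerSocketChannel.open();
            httpListener.bind(new InetSocketAddress(8080));              // placeholder http port
            httpListener.configureBlocking(false);
            httpListener.register(selector, SelectionKey.OP_ACCEPT);

            SocketChannel backend = SocketChannel.open();                // socket to tarantool
            backend.configureBlocking(false);
            backend.connect(new InetSocketAddress("127.0.0.1", 3301));   // placeholder address
            SelectionKey backendKey = backend.register(selector, SelectionKey.OP_CONNECT);

            Queue<ByteBuffer> pending = new ArrayDeque<>();              // pending backend requests
            ByteBuffer scratch = ByteBuffer.allocate(4096);

            while (true) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        // New http client: register it for reading, never block on it.
                        SocketChannel client = httpListener.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key == backendKey) {
                        if (key.isConnectable()) {
                            backend.finishConnect();
                            key.interestOps(SelectionKey.OP_READ
                                    | (pending.isEmpty() ? 0 : SelectionKey.OP_WRITE));
                        } else if (key.isWritable() && !pending.isEmpty()) {
                            // Rule 2: backend writable and a request is pending -> send it.
                            backend.write(pending.poll());
                            if (pending.isEmpty())
                                key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
                        } else if (key.isReadable()) {
                            // Rule 3: read the reply, match it by ID and answer the
                            // corresponding http client (matching omitted here).
                            scratch.clear();
                            backend.read(scratch);
                        }
                    } else if (key.isReadable()) {
                        // Rule 1: an http request arrived -> register a pending backend
                        // request and ask to be told when the backend is writable.
                        SocketChannel client = (SocketChannel) key.channel();
                        scratch.clear();
                        if (client.read(scratch) == -1) { client.close(); continue; }
                        pending.add(ByteBuffer.wrap("placeholder request".getBytes()));
                        backendKey.interestOps(backendKey.interestOps() | SelectionKey.OP_WRITE);
                    }
                }
            }
        }
    }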
Aside from that, HTTP itself is not easy to implement properly: it will always surprise you if you don't control the implementations of both the client and the server.
So I will describe alternatives to implementing your own http server that are compatible with tarantool and able to operate with it asynchronously to achieve good performance. They are:
Using the http.server tarantool module, which allows you to process http requests right in tarantool, without an external service and without using a connector to tarantool.
Using the nginx_upstream_tarantool nginx module, which allows nginx to perform requests to tarantool using the binary protocol.
There are pros and cons to both of these approaches. However, the disadvantages of http.server can be overcome by using nginx as a frontend that proxies requests to the tarantool instance(s), while keeping the advantages of http.server.
http.server cons:
No https support.
Single CPU bound / single tarantool instance bound.
Possibly worse performance than nginx (but not by much).
Possibly worse handling of HTTP corner cases (though I don't know of any).
http.server pros:
Simpler to start developing.
Simpler configuration / deployment: parts of the application configuration are not spread across tarantool and nginx configs.
The cons and pros of nginx_upstream_tarantool are the reverse of http.server's. I would also mention specifically that nginx allows you to balance load across several tarantool instances, which may form a replication group or be sharding frontends. This ability can be used to scale a service toward the desired performance, whether proxying to http.server or using nginx_upstream_tarantool.
I guess you are also interested in benchmarking results for http.server and nginx_upstream_tarantool. Look at this measurement. Note, however, that it is quite synthetic: it performs small requests and answers with small responses. Real RPS numbers may differ depending on the size of requests and responses, the hardware, whether https is needed, etc.

Why should we use IP spoofing when performance testing?

Could anyone please tell me what is the use of IP spoofing in terms of Performance Testing?
There are two main reasons for using IP spoofing while load testing a web application:
Routing stickiness (a.k.a. persistence) - Many load balancers use IP stickiness when distributing incoming load across application servers. So, if you generate the load from a single IP, you could load only one application server instead of distributing the load across all of them (this is also called persistence: using application-layer information to stick a client to a single server). With IP spoofing, you avoid this stickiness and make sure your load is distributed across all application servers.
IP blocking - Some web applications detect a mass of HTTP requests coming from the same IP and block them to defend themselves. When you use IP spoofing, you avoid being detected as a harmful source.
When it comes to load testing of web applications, a well-behaved test should represent a real user using a real browser as closely as possible, including things like:
Cookies
Headers
Cache
Handling of "embedded resources" (images, scripts, styles, fonts, etc.)
Think times
You might need to simulate requests originating from different IP addresses if your application (or its infrastructure, such as a load balancer) assumes that each user has a unique IP address. Also, DNS caching at the operating-system or JVM level may lead to a situation where all your requests hit only one endpoint while the others remain idle. So, if it is possible, it is better to shape the requests so that they come from different addresses.
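If the load generator itself runs on the JVM (JMeter, Gatling, etc.), the JVM-side DNS cache mentioned above can be relaxed from code; a minimal sketch using the standard java.security networking properties, which must be set before the first lookup:

    import java.security.Security;

    public class DisableJvmDnsCache {
        public static void main(String[] args) {
            // Do not cache successful lookups, so round-robin DNS can rotate addresses.
            Security.setProperty("networkaddress.cache.ttl", "0");
            // Do not cache failed lookups either.
            Security.setProperty("networkaddress.cache.negative.ttl", "0");
            // ... start the load test from this JVM after this point ...
        }
    }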

Integration of Shenzhen Concox Information Technology Tracker GT06 with EC2

I have a Concox GT06 device from which I want to send tracking data to my AWS server.
The coding protocol manual that comes with it only explains the data structure and protocol.
How does my server receive the GPS data collected by my tracker?
Verify that your server allows you to open sockets, which most low-cost solutions do NOT allow for security reasons (I recommend using an Amazon EC2 virtual machine as your platform).
Choose a port on which your application will listen for incoming data, verify that it is open (if not, open it), and code your application (I use C++) to listen on that port (a minimal sketch of such a listener follows these steps).
Compile and run your application on the server (and make sure that it stays alive).
Configure your tracker (usually by sending an SMS to it) to send data to your server's IP and to the port on which your application is listening.
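For orientation only (this is not the author's code), a generic sketch of such a listener, written in Java rather than C++ and using an arbitrary port; decoding the binary frames still has to follow the GT06 protocol manual:

    import java.io.InputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TrackerListener {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5023)) {   // arbitrary port; open it in the firewall
                while (true) {
                    Socket tracker = server.accept();              // one GT06 device connects
                    new Thread(() -> {
                        try (InputStream in = tracker.getInputStream()) {
                            byte[] buf = new byte[1024];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                // Raw binary frames arrive here; decode them according
                                // to the protocol manual shipped with the device.
                                System.out.printf("received %d bytes%n", n);
                            }
                        } catch (Exception ignored) {
                        }
                    }).start();
                }
            }
        }
    }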
If you are, as I suspect, just beginning, expect to invest 2 to 3 weeks to develop this solution from scratch. You might also consider looking for a pre-developed tracking platform, which may or may not be acceptable in terms of data security.
You can find examples and tutorials online. I am usually very open with my coding and would gladly send a copy of the socket server but, in this case, for security reasons, I cannot do so.
Instead of parsing TCP or UDP packets directly, you can use a simpler solution: put a middleware backend specialized in data parsing in between, e.g. flespi.
In this approach you can use an HTTP REST API to fetch each new portion of data that trackers send to your dedicated IP:port (called a channel), or even send standardized commands to the connected devices over HTTP REST.
At the same time, it is possible to open an MQTT connection using standard libraries and receive the device messages, converted into JSON, in real time over MQTT, which is even better than REST due to almost zero latency.
If you are using Python, you may take a look at the open-source flespi_receiver library. With this approach, in about 10 lines of code you can have the Concox GT06 messages, fully parsed into JSON, on your EC2 instance.
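As an illustration of the MQTT option, a hedged sketch using the Eclipse Paho Java client; the broker URI, token and topic filter below are placeholders, so check the middleware's documentation for the real values:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

    public class TrackerMqttSubscriber {
        public static void main(String[] args) throws Exception {
            MqttClient client = new MqttClient("tcp://broker.example.com:1883",   // placeholder broker
                                               MqttClient.generateClientId());
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName("YOUR_TOKEN_HERE");                               // placeholder credential

            client.connect(options);
            // Subscribe to the channel's message topic (placeholder filter) and print
            // every JSON message the middleware has already parsed for us.
            client.subscribe("your/channel/messages/#", (topic, message) ->
                    System.out.println(topic + ": " + new String(message.getPayload())));
        }
    }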

When would you need multiple servers to host one web application?

Is that called "clustering" of servers? When a web request is sent, does it go through the main server, and if the main server can't handle the extra load, then it forwards it to the secondary servers that can handle the load? Also, is one "server" that's up and running the application called an "instance"?
[...] Is that called "clustering" of servers?
Clustering is indeed the transparent use of multiple nodes that are seen as a single entity: the cluster. Clustering allows you to scale: you can spread your load across all the nodes and, if you need more power, you can add more nodes (short version). Clustering also makes you fault tolerant: if one node (physical or logical) goes down, the other nodes can still process requests and your service remains available (short version).
When a web request is sent, does it go through the main server, and if the main server can't handle the extra load, then it forwards it to the secondary servers that can handle the load?
In general, this is the job of a dedicated component called a "load balancer" (hardware or software) that can use many algorithms to balance the requests: round-robin, FIFO, LIFO, load-based...
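To make the simplest of those concrete, a minimal round-robin sketch (the backend addresses are placeholders):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RoundRobinBalancer {
        private final List<String> backends;
        private final AtomicInteger next = new AtomicInteger();

        RoundRobinBalancer(List<String> backends) {
            this.backends = backends;
        }

        String pick() {
            // floorMod keeps the index valid even if the counter wraps around
            return backends.get(Math.floorMod(next.getAndIncrement(), backends.size()));
        }

        public static void main(String[] args) {
            RoundRobinBalancer lb =
                new RoundRobinBalancer(List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"));
            for (int i = 0; i < 6; i++) {
                System.out.println("request " + i + " -> " + lb.pick());   // cycles through the backends
            }
        }
    }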
In the case of EC2, you previously had to load balance with round-robin DNS and/or HAProxy. See Introduction to Software Load Balancing with Amazon EC2. But Amazon has since launched load balancing and auto-scaling (beta) as part of their EC2 offerings. See Elastic Load Balancing.
Also, is one "server" that's up and running the application called an "instance"?
Actually, an instance can be many things (depending on who's speaking): a machine, a virtual machine, a server (software) up and running, etc.
In the case of EC2, you might want to read Amazon EC2 Instance Types.
Here is a real example:
This specific configuration is hosted at RackSpace in their Managed Colo group.
Requests pass through a Cisco firewall. They are then routed across a gigabit LAN to a Cisco CSS 11501 Content Services Switch (i.e. a load balancer). The load balancer matches the incoming content to a content rule, handles the SSL decryption if necessary, and then forwards the traffic to one of several back-end web servers.
Every 5 seconds, the load balancer requests a URL on each webserver. If the webserver fails (twice in a row, IIRC) to respond with the correct value, that server is not sent any traffic until the URL starts responding correctly.
Further behind the webservers is a MySQL master / slave configuration. Connections may be made to the master (for transactions) or to the slaves for read-only requests.
Memcached is installed on each of the webservers, with 1 GB of RAM dedicated to caching. Each web application may utilize the cluster of memcached servers to cache all kinds of content.
Deployment is handled using rsync to sync specific directories on a management server out to each webserver. Apache restarts, etc. are handled through similar scripting over SSH from the management server.
The amount of traffic that can be handled through this configuration is significant. The advantages of easy scaling and easy maintenance are great as well.
For clustering, any web request would be handled by a load balancer which, being kept up to date on the current loads of the servers forming the cluster, sends the request to the least burdened server. As for whether it's called an "instance"... I believe so, but I'd wait for confirmation on that.
You'd need a very large application to be bothered with thinking about clustering and the "fun" that comes with it, software- and hardware-wise, though. Unless you're looking to start, or are already running, something big, it wouldn't be anything to worry about.
Yes, it can be required for clustering. Typically, as the load goes up, you might find yourself with a frontend server that does URL rewriting, HTTPS if required, and caching (with Squid, say). The requests get passed on to multiple backend servers, probably using cookies to associate a session with a particular backend if necessary. You might also have the database on a separate server.
I should add that there are other reasons why you might need multiple servers; for instance, there may be a requirement that the database is not on the frontend server for security reasons.

Detecting dead applications while server is alive in NLB

Windows NLB works great and removes a computer from the cluster when the computer is dead.
But what happens if the application dies but the server still works fine? How have you solved this issue?
Thanks
By not using NLB.
Hardware load balancers often have configurable "probe" functions to determine whether a server is responding to requests. This can be done by accessing the real application port/URL, or some specific "healthcheck" URL that responds only if the application is healthy.
Other options on these devices look at the queue length or the time taken to respond to requests.
Cisco put it like this:
The Cisco CSM continually monitors server and application availability using a variety of probes, in-band health monitoring, return code checking, and the Dynamic Feedback Protocol (DFP). When a real server or gateway failure occurs, the Cisco CSM redirects traffic to a different location. Servers are added and removed without disrupting service—systems easily are scaled up or down.
(from here: http://www.cisco.com/en/US/products/hw/modules/ps2706/products_data_sheet09186a00800887f3.html#wp1002630)
Presumably with Windows NLB there is some way to programmatically set the weight of nodes? The nodes should self-monitor and, if there is some problem (e.g. a particular node is low on disk space), set their weight to zero so they receive no further traffic.
However, this needs to be carefully engineered and have further human monitoring to ensure that you don't end up with a situation where one fault causes the entire cluster to announce itself down.
You can't really hope to deal with a "byzantine general" situation in network load balancing; an appropriately broken node may think it's fine and appear fine, while being completely unable to do any actual work. The trick is to try to minimise the possibility of these situations happening in production.
There are multiple levels of health check for a network application.
is the server machine up?
is the application (service) running?
is the service accepting network connections?
does the service respond appropriately to a "are you ok" request?
does the service perform real work? (this will also check the back-end systems behind the service you are probing)
My experience with NLB may be incomplete, but I'll describe what I know. NLB can do 1 and 2. With custom coding you can add the other levels with varying difficulty. With some network architectures this can be very difficult.
Most hardware load balancers from vendors like Cisco or F5 can be easily configured to do 3 or 4. Level 5 testing still requires custom coding.
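A minimal sketch of a level-4 "are you ok" probe (the /health URL, port and timeouts are assumptions, and wiring the result back into NLB node start/stop is left out):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HealthProbe {
        static boolean isHealthy(String hostAndPort) {
            try {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://" + hostAndPort + "/health").openConnection();
                conn.setConnectTimeout(2000);   // fail fast if the service is wedged
                conn.setReadTimeout(2000);
                return conn.getResponseCode() == 200;
            } catch (Exception e) {
                return false;                   // any error counts as unhealthy
            }
        }

        public static void main(String[] args) {
            System.out.println(isHealthy("127.0.0.1:8080") ? "UP" : "DOWN");
        }
    }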
We start in the situation where all nodes are part of the cluster but inactive.
We run a custom service monitor which makes a request to the service locally via the external interface. If the response is successful we start the node (allow it to start handling NLB traffic). If the response fails we stop the node from receiving traffic.
All the intermediate steps described by Darron are irrelevant. Whether it worked or not is the only thing we care about. If the machine is inaccessible, then the rest of the NLB cluster will treat it as failed.
