Performance testing on a host intrusion prevention system (HIPS) server

I need some help here. In my current project we are replacing a Host Intrusion Prevention Software (HIPS), and I need to do performance testing on its server. Can you throw some light on what types of performance testing can be done on the server, and also on what kind of comparison tests I can run against the existing HIPS server?

It depends on what you are trying to achieve, actually. Are you developing a Host Intrusion Prevention Software yourselves, or just replacing an existing one with another vendor's product or a newer version?
Since your title says you want to do performance testing, I will leave aside all the functional and security testing you could do.
One thing you can measure is the resource consumption of the HIPS. You could compare it with the previous product to find out whether it will need a more robust host than you currently have, or whether it will have a performance impact on the other applications running on the same host.
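For example, a very rough way to capture that baseline is to sample CPU and memory usage on the host at a fixed interval while the agent is running, once with the old HIPS and once with the new one, and then compare the two series. Here is a minimal sketch in Java; the class name and the sampling duration are just placeholders, and in practice you would probably lean on the OS's own tooling (perfmon, top, sar) instead:

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    // Samples whole-host CPU load and free physical memory once per second.
    // Run it for the same duration and workload with the old and the new HIPS
    // installed, then diff the two CSV outputs.
    public class HostResourceSampler {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();

            System.out.println("timestampMillis,cpuLoad,freePhysicalMemoryBytes");
            for (int i = 0; i < 300; i++) {          // ~5 minutes of samples
                // Note: on recent JDKs these two calls are deprecated in favour
                // of getCpuLoad() / getFreeMemorySize().
                double cpu = os.getSystemCpuLoad();  // 0.0 - 1.0, -1 if unavailable
                long freeMem = os.getFreePhysicalMemorySize();
                System.out.printf("%d,%.3f,%d%n",
                        System.currentTimeMillis(), cpu, freeMem);
                Thread.sleep(1000);
            }
        }
    }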

Related

JMeter Mobile Native App Testing

I have two questions related to native app performance testing.
1) I have a payment app, and it comes with bank security which is installed at the time of app installation. It sends a token number and the rest of the data in encrypted format. Is it possible to handle this kind of request using JMeter or any other performance testing tool? Do I need to change some setting in the app server or in JMeter to get this done?
2) The mobile app uses a device ID, so if I simulate load from a cloud server, will it use the same device ID I used while creating the script? Is it possible to simulate different device IDs to make it more realistic?
Any help or references will be appreciated. :)
(1) Yes. This is why performance testing tools are built around general-purpose programming languages: so that you (as the tester) can leverage your foundation programming skills, and the appropriate algorithms and libraries, to reproduce the same behaviour as the client.
(2) This is why performance testing tools allow for parameterization of the data stream sent to the server/application under test.
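To picture what that parameterization means for point (2): hypothetically, the per-user data (device IDs, account numbers, and so on) lives in a data file and each simulated user pulls its own row instead of replaying the recorded value. The file name and layout below are made up for the example; in JMeter itself the usual way to do this is a CSV Data Set Config element rather than hand-written code.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Toy illustration of parameterization: each virtual user gets its own
    // device ID from a data file instead of reusing the recorded one.
    public class DeviceIdFeeder {
        private final List<String> deviceIds;

        public DeviceIdFeeder(String csvPath) throws Exception {
            deviceIds = Files.readAllLines(Paths.get(csvPath)); // one ID per line
        }

        // Virtual user N always gets row N (wrapping around if the file is short).
        public String idForUser(int virtualUserNumber) {
            return deviceIds.get(virtualUserNumber % deviceIds.size());
        }

        public static void main(String[] args) throws Exception {
            DeviceIdFeeder feeder = new DeviceIdFeeder("device_ids.csv"); // hypothetical file
            for (int user = 0; user < 5; user++) {
                System.out.println("user " + user + " -> deviceId " + feeder.idForUser(user));
            }
        }
    }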
I'm not an expert in JMeter, but I work a lot with LoadRunner (LR), the performance testing tool from HP. Though JMeter and LR are different tools, they work on the same principles, and the objective of performance testing is the same.
As James Pulley mentioned, the performance testing tool may have the capability. But the question is:
Have you tried recording your app with JMeter? Since your app is a native one, please do the recording from a simulator/emulator and check the feasibility. JMeter might not be the right candidate for mobile app load testing.
Alternatively, there are lots of other tools available on the market (both commercial and open source) for your objective.
Best Regards
With the rise of several mobile network technologies, load testing a mobile application has become a different ball game compared with normal web app load testing. This is because of the differences in response times across mobile networks such as 2G, 3G, 4G, etc. Additionally, the client, being a mobile device, has plenty of physical constraints such as limited CPU, RAM and internal storage. All of these need to be considered while conducting performance testing of a mobile application if one wants to simulate a scenario close to real-world conditions.
Coming to your two questions:
1) Yes, it is possible, but the amount of manual effort needed to make the script ready for execution may vary (since you mention there is data in encrypted format: some encryption schemes are easy to work out, and some are just crude and difficult to handle using JMeter). There should not be any app server setting that needs to change (unless, of course, you are unable to handle the encryption with JMeter, in which case the encryption might have to be disabled for the QA phase).
2) As James Pulley rightly said, these values can be parameterized. However, I suspect these values will be validated by the app server, so they need to be fed into the requests appropriately.
You can refer to this link for guidance on how to do mobile performance testing for a native application: http://www.neotys.com/documents/doc/neoload/latest/en/html/#4234.htm#o4237. The same approach can be extrapolated to JMeter to an extent.
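To make point 1 a bit more concrete: if (and only if) the app uses a standard algorithm and the test team can legitimately obtain the key material from development, you can reproduce the encryption in a pre-request step so the tool sends the same ciphertext a real client would. The sketch below assumes AES/CBC with a known key and IV, which is purely an assumption about the app; in JMeter you would typically put equivalent logic in a JSR223 PreProcessor.

    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Sketch only: assumes the app encrypts the payload with AES/CBC/PKCS5Padding
    // using a key and IV that the test team can legitimately obtain.
    public class PayloadEncryptor {
        public static String encrypt(String plaintext, byte[] key, byte[] iv) throws Exception {
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(key, "AES"),
                    new IvParameterSpec(iv));
            byte[] cipherBytes = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(cipherBytes); // what the request body carries
        }

        public static void main(String[] args) throws Exception {
            byte[] key = new byte[16]; // placeholder 128-bit key, not the real one
            byte[] iv  = new byte[16]; // placeholder IV
            System.out.println(encrypt("{\"token\":\"12345\",\"amount\":10}", key, iv));
        }
    }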

How to start performance testing

I am new to performance testing and would like to test the following website: www.volkswagen.co.nz. I have taken it as an example for learning and have gathered some information about tools, objectives and scenarios, but I need your inputs. Please assist me.
Can you tell me what needs to be tested? What are the scenarios, and what are the activities for each scenario? What metrics do I need to add? Which is the best free tool for testing it? How do I test it if it is deployed in a cloud like AWS?
Please let me know. Thanks in advance.
Performance testing needs you to:
Identify the critical/heavy/important scenarios in your web app (irrespective of deployment, cloud or standalone).
Identify service level agreements in terms of response times, throughput, latency, etc.
Identify the workload model, i.e. how much user load the application is expected to handle. This should be as fine-grained as possible (average users per transaction/workflow at a point in time).
Identify the tools (JMeter is free and very capable, but if you can afford a paid tool then look at LoadRunner, NeoLoad, etc.).
Record the scripts for the workflows, then parameterize and correlate them.
Build the test setup for the load test and execute it.
Monitor system utilization and collect metrics such as response time, throughput, error rate and latency (see the sketch after this list).
This all comes under load testing. For more you can read http://www.guru99.com/performance-testing.html
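Once you have raw samples from a run, the metrics in that last step are simple arithmetic. A minimal sketch, assuming each sample is just an elapsed time in milliseconds plus a success flag and the test duration is known (all values here are illustrative):

    import java.util.Arrays;

    // Computes average / 90th-percentile response time, throughput and error rate
    // from a set of recorded samples (elapsed times in ms plus success flags).
    public class LoadTestMetrics {
        public static void main(String[] args) {
            long[] elapsedMs  = {120, 95, 310, 150, 101, 98, 2200, 130, 115, 140};
            boolean[] success = {true, true, true, true, true, true, false, true, true, true};
            double testDurationSeconds = 60.0;

            long[] sorted = elapsedMs.clone();
            Arrays.sort(sorted);

            double avg = Arrays.stream(elapsedMs).average().orElse(0);
            long p90 = sorted[(int) Math.ceil(0.90 * sorted.length) - 1]; // nearest-rank percentile
            long errors = 0;
            for (boolean ok : success) if (!ok) errors++;

            System.out.printf("avg response time : %.1f ms%n", avg);
            System.out.printf("90th percentile   : %d ms%n", p90);
            System.out.printf("throughput        : %.2f req/s%n", elapsedMs.length / testDurationSeconds);
            System.out.printf("error rate        : %.1f %%%n", 100.0 * errors / success.length);
        }
    }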
"I am new to performance testing and would like to test the following website: www.volkswagen.co.nz"
That is a recipe for disaster. No one new should be allowed to work on their own without a full period of training and internship with a master in the field. This is true of stone masons, electricians, plumbers, barbers, accountants, engineers and physicians. And it is most certainly true of performance testers/engineers.
There are dozens of foundation skills you need to master before you touch any tool, open source or otherwise. Until you show mastery of those skills, along with the mechanics of your tool, you should not be allowed to test any website, particularly a production website. And if you don't work for this company, what you are engaging in is a denial-of-service attack and could leave you exposed to legal liability.
I strongly agree with James on this one.
Do not touch the site if:
it's not yours
you're not sure what you are doing
the owner hasn't given you explicit (and, frankly, irresponsible) permission
you don't know how, or don't have the support, to restore the environment to a working state
If you do work for the company then you need to have a test environment first, a playground where you can mess around and nobody would mind if you take it down.
First, get information from the business on which use cases need to be tested.
Get response time targets for user actions and for environment utilisation.
For the environment utilisation targets, define your environment monitoring tactics.
Find a tool that is fit for purpose: JMeter, Gatling, etc.; lots of free ones are available.
Get a test environment, preferably of a similar scale to production.
Create scripts to cover the critical use cases.
Combine the scripts into scenarios (a quick sketch follows after this list).
Create a reporting framework
Kick off monitoring
Kick off scenario
Collect and analyse results
Be mindful of the free editions of load testing tools: they tend to be easy to use at first, but as soon as you start to outgrow them they can cost a fortune, and more often than not it is hard to port scripts/scenarios to another tool.
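The "scripts into scenarios" step can be pictured as nothing more than a set of concurrent virtual users each running a recorded use case. The sketch below fakes that with a plain thread pool hitting a placeholder URL; it is obviously far cruder than what JMeter or Gatling do, but it shows the shape of a scenario. The URL, user count and iteration count are assumptions for the example.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Crude "scenario": N virtual users, each executing the same use case
    // (a single GET here) a fixed number of times and reporting elapsed time.
    public class MiniScenario {
        public static void main(String[] args) throws Exception {
            int virtualUsers = 10;
            int iterationsPerUser = 5;
            String url = "https://test-env.example.com/landing"; // placeholder test-environment URL
            HttpClient client = HttpClient.newHttpClient();

            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            for (int u = 0; u < virtualUsers; u++) {
                final int userId = u;
                pool.submit(() -> {
                    for (int i = 0; i < iterationsPerUser; i++) {
                        long start = System.nanoTime();
                        try {
                            HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
                            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
                            long ms = (System.nanoTime() - start) / 1_000_000;
                            System.out.printf("user %d iter %d -> HTTP %d in %d ms%n",
                                    userId, i, resp.statusCode(), ms);
                        } catch (Exception e) {
                            System.out.printf("user %d iter %d -> error: %s%n", userId, i, e.getMessage());
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
        }
    }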

Network Settings for JMeter Traffic | Internet or LAN?

I'm going to perform a load and stress test on a web page using Apache JMeter, but I'm not very sure about the appropriate network setup. Is it better to connect the two machines (the server hosting the web page and the client running JMeter) via the local network or via the internet? Using the internet would be closer to the real scenario, but with a local network the connection is much more stable and you have more bandwidth for more requests at the same time.
I'd be very thankful for opinions!
These are in fact two styles or approaches to load testing, both are valid.
The first you might call lab testing, where you minimise the number of factors that can affect throughput and response times and really focus the test on the system itself.
The second is the more realistic scenario where you are trying to get as much coverage as possible by routing requests through as many of the actual network layers that will exist when the system goes live.
The benefit of method 1 is that you simplify the test, which makes understanding and finding any problems much easier. The drawback is that you lack complete coverage.
The benefit of method 2 is that it is not only more realistic but also gives a higher level of confidence: especially with higher-volume tests you might find you have a problem with a switch or a firewall, and it is only with this type of testing that you identify such issues. The drawback is that it can make finding any issues harder.
So, in short, you really want to do both types. You might find it easier to start with the full end to end test from outside in, and then only move to a more focused test if you find that you need to isolate / investigate a problem. That way you stand a chance of reducing the amount of setup work whilst still getting the maximum benefit from testing.
Note: outside in means just that: your test rig should be located outside of the LAN (assuming this is how live traffic will flow). These days this is easy to set up using cloud-based hardware.
Also note: If the machine that you are running the tests from is the same in both cases then routing the traffic via the internet (out of your LAN and then back in again) is probably not going to tell you anything useful and could actually cause a false negative in your results (not to mention network problems for your company!)
IMHO you should use your LAN.
Practically every user will have a slightly different download/upload speed, so I suggest you first do a normal performance test using your LAN, and when you finish you can do a few runs from outside, just to see the difference.
Remember, you're primarily testing the efficiency of your application on the hardware it sits on. The network speed of your future users is a factor you cannot influence in any way.
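For those "few runs from outside", even a trivial timing probe makes the comparison concrete: run it once from a machine inside the LAN and once from a machine outside (e.g. a cheap cloud VM) against the same page, and the difference in the averages is roughly the network contribution. The URL and run count below are placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Tiny latency probe: compare the average printed from a LAN machine with
    // the average printed from a machine outside the LAN to see how much of the
    // response time is network rather than server processing.
    public class RouteProbe {
        public static void main(String[] args) throws Exception {
            String url = args.length > 0 ? args[0] : "https://test-env.example.com/"; // placeholder
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();

            long totalMs = 0;
            int runs = 20;
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                client.send(req, HttpResponse.BodyHandlers.discarding());
                totalMs += (System.nanoTime() - start) / 1_000_000;
            }
            System.out.printf("average round trip over %d requests: %d ms%n", runs, totalMs / runs);
        }
    }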

Running my own server with a "developer's background"? [closed]

I have a couple of different projects running at the moment: some PHP apps and a few WordPress instances, all currently kept at a web hosting company. The contract period is about to end, and I would be lying if I said I hadn't seriously considered switching to a VPS in the cloud, with prices getting really attractive.
I am totally in love with the idea of being able to turn the performance up or down as demand increases or goes away, and thereby cut costs.
With my background as a PHP developer, and only a little Linux (Ubuntu) knowledge, I am thoroughly concerned about security if I run my own VPS.
Sure, I am able to install things and get them running with my current knowledge (and some help from Google), but is it realistic nowadays to expect that my server (LAMP, really) will stay secure by running out-of-the-box stuff and keeping it up to date?
Thanks
Maintaining your server is just one more thing to worry about, and if you're a developer, your focus should probably be on development. That said, it needs to make financial sense to go the managed route. If you're just working on toy projects (I've got a $20/month VPS that I use for my personal projects and homepage, and it's pretty hands-off) or if you're just getting off the ground, VPSes have the great advantage of being cheap and giving you lots of control of your environment. You can even mitigate some of the risk by keeping aggressive backups, since it's easy to redeploy a server quickly.
But, if you get to the point where it won't affect your profitability to do so, you probably should seriously consider getting someone else to take care of infrastructure for you either by buying managed hosting services or hiring someone to do it for you. It all depends on what you can afford to lose if you get rooted and how much time you can afford to invest in server management and recovery as opposed to coding.
I wouldn't. We did the same thing because non-managed VPSs are sooo cheap, but unless you really need to install applications or libraries that are not part of a standard shared-hosting setup, in my experience (being a pure developer as well) the time spent is never worth it.
Unless, of course, it is your own tiny blog or you just want to play around.
But imagine you (or whatever automation you use) update PHP, and for some reason it fails (or worse, you render your current installation unusable): are you good enough to handle this? And if so, how long will it take you? Do you have a friend at hand who can help?
We, as a small company, are getting rid of our VPSs step by step and moving back to our reseller package, hosted at a good hosting provider.
Good question, though.
As for security, I have successfully used Amazon EC2 for a number of things. It's not the cheapest around, but it is quite comprehensive: shared data stores between instances, connection to S3, running hosts in different data centres, grouping hosts in different clusters, and so on.
They have a firewall built in, where you can turn everything off except, say, TCP traffic on port 22 for SSH and port 80 for the web. That, combined with something like Ubuntu, where you can easily run updates without worrying much about breakage, is probably all you need from a security point of view.
You need to consider cloud computing as a statement of availability, not cost. You can be seriously surprised by the cost in the end.
I have already opted for VPS hosting. Good VPS hosting is costly; these days you may find a dedicated host that is cheap compared to a VPS. Have a look at hivelocity.com – I like their services.
As for security, most VPS hosting companies take care of security for you at the infrastructure level, and some may run antivirus software on files. On a dedicated host you need to take care of it yourself or contract managed support services: a trade-off.
A LAMP server is cheap everywhere. You can hire a private VPS and get some security, and you may count on services like DNS hosting too – which is a hassle to configure. A VPS can be your first step while you are doubtful and have no hosting experience. Later, when you discover the advantages of having your own server, you'll migrate straight to a dedicated server.
What is acceptable from a security standpoint will differ depending on the people involved, what you want to secure and requirements of the product/service.
For a development server I usually don't care so much, so I do some basic securing of the server and then don't pay attention to it again. My main concern is someone getting a session and using my cycles to run something. I don't normally care about IP, so that's not a concern for me.
If I'm setting up a box that has to meet Sarbanes-Oxley, Safe Harbor, or other PII/PCI standards, I would probably go managed just because I don't want the additional security workload.
Somewhere in between is a judgment based on if I want to commit the required time to secure the server to the level I want it secured at. If I don't want to do it myself I pay someone to do it.
I would be careful about assuming you're getting a certain level of security just because you're paying someone to manage your server. I've come across plenty of shops where security is really an afterthought.
If I understand you correctly, you are considering a move from a web host to a VPS, and you wonder whether you have the skills to ensure the OS remains secure now that it's under your control?
I guess it's an open-ended question. You are moving from a managed environment to an unmanaged environment, and whether you maintain your environmental security is up to you. If you're running your own server then you need to make sure that default passwords aren't in use (for the database, OS and any services on top), patches are quickly identified and applied, host firewalls are configured properly and suspicious activity alerts are immediately sent to you. Hang on, does your current web host do any of this for you? Without details about your current web host and the planned VPS, you are pretty much comparing apples to oranges.
BTW, I would be somewhat concerned about my LAMP server security, but frankly I would be much more concerned about development errors (SQL injection, XSS) and the packages running on top of my server (default passwords + dev errors).
For a LAMP stack, I would probably not do it. It would be a different case if you were using a Platform-as-a-Service provider like Windows Azure: in my own experience there is minimal operational overhead, and you just upload the app and it runs in a VM (and yes, it supports PHP).
But for Linux there are no such providers that I know of, which means you will have to manage the operating system, the app frameworks, the web server and anything else you install on the instance. I wouldn't do it myself. I would weigh hiring a person with the relevant experience to do this for me against the cost of managed services from the VPS provider, and go with one of those two.
Rather than give you advice about what you should do, or tell you what I would do, I'm just going to address your question "is it realistic nowadays to expect that my server (LAMP, really) will stay secure by running out-of the box stuff and keeping it up-to date?" The answer to this question, in my opinion, is basically yes.
dietbuddha is right, of course: what constitutes an acceptable level of security depends on the context, but for all but the most security-sensitive purposes, if you're using a current (i.e. supported) distro, with sane defaults, and keeping up with the security updates, then you ought to be fine.
I have two VPSs, each of which currently runs Ubuntu 10.04 Server. On one of them I spent some time installing and configuring tiger and tripwire and taking various other security measures. On the other, I simply installed fail2ban, set security updates to automatic, and left it at that. They've been running for a few years now, and I've had no problems with either.
You should do it for fun and for learning purposes. Other than that, don't; you're wasting your own time and a lot of other people's time.
I say this because I've wasted serious time setting up an EC2 instance to host my SVN server and a few other things. I mean, I loved setting everything up and messing with the server; I learned a lot, especially because I'd never done anything with a Linux server before. However, looking back, I wasted a ton of time and had to keep bugging Jordan S. Jones for help.

High traffic web sites

What makes a site good for high traffic?
Does it have more to do with the hardware/infrastructure, or with how one writes the software, using Java as the example, if it matters?
I'm wondering how the software changes just because it is expected that billions of users will be on the site, if at all.
My understanding up to this point is that the code doesn't change, but that it is deployed on multiple servers in a cluster, with a load balancer distributing the load; so really, on any one server/deployment, the application is just like any other standard application/website.
I highly recommend reading Jeff Atwood's blog post on micro-optimization. In earlier posts he talks about how this site was created and the hardware upgrades it has had (quickly summarized: better hardware only helps to the extent that it is faster/better), but the real speed of a site comes from good programming, and that article should answer some of your questions about site programming quite well.
Hardware is cheap. Programming is expensive.
There are some programming techniques to make sure your code can handle multiple simultaneous views/updates. If you're using an existing framework, much of that work is (hopefully) done for you, but otherwise you're going to find that things that worked for a few hundred hits an hour on one server won't work when you're getting hundreds of thousands of hits and you have to deploy multiple load-balancing machines.
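One common example of such a technique is optimistic locking on updates: instead of letting the last write silently win when many users touch the same record, you carry a version number and reject stale writes. A minimal sketch; the table, columns and class name are purely illustrative.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // Optimistic locking sketch: the UPDATE only succeeds if nobody else has
    // bumped the version since this user read the row.
    public class AccountDao {
        public boolean updateBalance(Connection conn, long accountId,
                                     long newBalance, int versionReadEarlier) throws Exception {
            String sql = "UPDATE accounts SET balance = ?, version = version + 1 " +
                         "WHERE id = ? AND version = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, newBalance);
                ps.setLong(2, accountId);
                ps.setInt(3, versionReadEarlier);
                return ps.executeUpdate() == 1; // false -> someone got there first; retry or report a conflict
            }
        }
    }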
Well, it is primarily an issue of hardware scaling, but there are a few things to keep in mind with respect to the software side of scaling. For example, if you are on a server farm, you'll need to work with a session-management server (either via SQL Server or via a state server), which means your session variables need to be serializable.
But in the bigger picture, there are a variety of things you would want to do to scale to an enterprise level. For example, it becomes particularly important to abstract your database calls into a DAL (data access layer), because you may well need to adopt a middleware package for high-volume environments.
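As a rough illustration of both points (serializable session state and a data access layer you can swap out), with all names being placeholders:

    import java.io.Serializable;
    import java.util.List;

    // Session data must be serializable if it is going to live in an
    // out-of-process session store (SQL Server, a state server, Redis, ...).
    class ShoppingCart implements Serializable {
        private static final long serialVersionUID = 1L;
        private final List<Long> productIds;
        ShoppingCart(List<Long> productIds) { this.productIds = productIds; }
        List<Long> productIds() { return productIds; }
    }

    // The application codes against this interface only; whether the implementation
    // talks straight to the database or goes through a middleware/caching tier is
    // invisible to the callers, which is what makes a later swap possible.
    interface CartDao {
        ShoppingCart loadCart(long userId);
        void saveCart(long userId, ShoppingCart cart);
    }

    // One possible implementation; a high-volume deployment could replace it with
    // a version that reads through a cache or a message-based middleware layer.
    class JdbcCartDao implements CartDao {
        @Override public ShoppingCart loadCart(long userId) {
            // ... JDBC code elided ...
            return new ShoppingCart(List.of());
        }
        @Override public void saveCart(long userId, ShoppingCart cart) {
            // ... JDBC code elided ...
        }
    }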
