Nessus scan interpretation based upon credentials? - nessus

Can someone clearly state the difference between running a Nessus scan with and without credentials? What would happen if I scanned a Unix-based system with no credentials and, at about the same time, with an SSH account?
How would the results differ? And in what situations is one preferred over the other?

Credentialed scanning is preferred to non-credentialed scanning because it can run scripts directly on the host machine to identify installed software and versions that might be vulnerable, as well as to check for vulnerabilities that might be present. A non-credentialed scan basically makes educated guesses based on network banner grabs and the TCP/IP stack information it observes in order to work out what vulnerabilities are present.
An uncredentialed scan is equivalent to walking around a house and checking the locks on the doors and windows by attempting to open them. A credentialed scan, on the other hand, is like having the key to the house: you can examine the locks from the inside, see what type of lock each one is, whether it is susceptible to known weaknesses, and who has a copy of the keys.
Credentialed scans provide much more information about the systems but require much more coordination and effort than a simple non-credentialed scan. They also require a level of trust between the scanning host and the target host.

You might want to go with unauthenticated scans for black-box testing, where you have no information about the target in your scope. This may lead to a lot of false positives.
For white-box/grey-box testing, however, you should go with credentialed scans. This greatly reduces the chance of false positives and gives a more comprehensive report of findings.
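To make the distinction concrete, here is a small Python sketch (not how Nessus works internally, just an illustration of the two viewpoints): the uncredentialed view guesses a version from the SSH banner, while the credentialed view logs in and asks the package manager directly. The host address, account, and package name are hypothetical.

```python
import socket
import paramiko  # third-party SSH library, used here for the credentialed view

HOST = "192.0.2.10"  # hypothetical target

# Uncredentialed view: grab whatever banner the service advertises and
# guess the version from it - this is roughly all a network-only scan sees.
with socket.create_connection((HOST, 22), timeout=5) as s:
    banner = s.recv(256).decode(errors="replace").strip()
print("banner says:", banner)  # e.g. "SSH-2.0-OpenSSH_8.2p1"

# Credentialed view: log in over SSH and ask the host directly, the way a
# credentialed scan runs local checks on the target.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username="scanuser", password="...")  # hypothetical account
_, out, _ = ssh.exec_command("dpkg -s openssh-server | grep Version")
print("package manager says:", out.read().decode().strip())
ssh.close()
```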


Does ZMQ need integrity checks in the application layer?

I'm building a distributed application using the ZMQ framework that needs to assure the integrity of the messages exchanged. My question is whether or not I need to perform integrity checks on the client and server at the application layer.
I have implemented a checksum approach using an MD5 hash on both the client's and the server's side. However, I suspect this might be redundant, since ZMQ might already be handling integrity checks in the background. I have read ZMQ - The Guide and found scarce information on this matter, apart from small references indicating that ZMQ already does integrity checks:
It delivers whole messages exactly as they were sent, using a simple framing on the wire. If you write a 10k message, you will receive a 10k message.
I also searched in forums, including SO, and couldn't find any solid reference that could confirm this. I would appreciate it if someone could confirm it and, ideally, include a useful source.
EDIT
I am looking for answers other than "trust the docs" or "implement checksums", for two reasons:
I think there need to be clear and easy-to-find references for what seems to be one of the key selling points of ZMQ.
The system under design must be fast, so it should not waste time on redundant operations.
Looking at the documentation, I read it as saying the entire purpose of ZeroMQ is reliable transmission.
I'd say that it depends on what you're worried about most.
ZMQ is built on top of TCP and other protocols, and these in turn rely on underlying layers like IP and Ethernet. As you get down towards the physical layers of the network stack, there is integrity checking built in to provide reliable services to the layers above (including application libraries like ZMQ). So, under ordinary circumstances, you would not need to put in your own integrity checks, because that is already taken care of for you. ZMQ does not do anything extra so far as I know; it simply assumes the underlying network stack delivers bytes properly, intact.
However, such underlying integrity checks do not guarantee to eliminate all bit errors; they just get the bit error rate down to some very good level where most applications don't care (e.g. 1 in 10^12) and probably never, ever experience a problem. Adding a supplementary checksum pushes the net bit error rate to an even safer level.
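For reference, a supplementary checksum like the one the question describes is only a few lines with pyzmq, assuming the digest travels as a second frame of a multipart message (MD5 is kept here only to mirror the question; any digest would do):

```python
import hashlib
import zmq

def send_with_digest(sock, payload: bytes) -> None:
    # Payload and digest travel as two frames of one multipart message;
    # ZMQ delivers multipart messages atomically.
    digest = hashlib.md5(payload).hexdigest().encode()
    sock.send_multipart([payload, digest])

def recv_with_digest(sock) -> bytes:
    payload, digest = sock.recv_multipart()
    if hashlib.md5(payload).hexdigest().encode() != digest:
        raise ValueError("application-layer checksum mismatch")
    return payload

# Tiny demo over an in-process transport.
ctx = zmq.Context.instance()
a, b = ctx.socket(zmq.PAIR), ctx.socket(zmq.PAIR)
a.bind("inproc://demo")
b.connect("inproc://demo")
send_with_digest(a, b"some 10k message payload ...")
print(recv_with_digest(b))
```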
If you're worried about an active attack by a malicious third party against the ZMTP protocol itself, then you may wish to introduce your own integrity check, likely cryptographic in nature, with all that entails. This might involve using libsodium along with ZeroMQ. That certainly was a thing, and probably still is, unless I'm out of date and have missed a deprecation notice.
Summary
I'd say:
Ordinary app, runtime of days, weeks - nothing extra needed
Very long running app (years) where bit errors are utterly unacceptable (e.g. a safety-critical application) - add a checksum
Needs to operate in a hostile environment where protocol attacks may occur - add a strong encryption layer such as libsodium (a sketch follows this list)
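As a rough sketch of the third case, pyzmq exposes ZeroMQ's CURVE mechanism (which uses libsodium under the hood) directly as socket options; this assumes a libzmq build with CURVE support and keeps both peers in one process purely for illustration:

```python
import zmq

ctx = zmq.Context.instance()

# Long-term key pairs; in practice generate once and distribute the
# public halves out of band.
server_public, server_secret = zmq.curve_keypair()
client_public, client_secret = zmq.curve_keypair()

server = ctx.socket(zmq.REP)
server.curve_secretkey = server_secret
server.curve_publickey = server_public
server.curve_server = True                 # this socket accepts CURVE clients
server.bind("tcp://127.0.0.1:5555")

client = ctx.socket(zmq.REQ)
client.curve_secretkey = client_secret
client.curve_publickey = client_public
client.curve_serverkey = server_public     # pin the server's public key
client.connect("tcp://127.0.0.1:5555")

client.send(b"hello over an encrypted, authenticated channel")
print(server.recv())
```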
The ZeroMQ (ZMQ) messaging library does not itself perform integrity checks at the application layer. ZMQ provides a set of low-level communication patterns that let applications exchange messages quickly and efficiently, but it does not include any built-in application-layer mechanisms such as error correction or checksum calculation.
However, this does not mean that applications built on top of ZMQ do not need integrity checks. Depending on the specific requirements and goals of the application, it may be necessary to implement integrity checks in the application layer to ensure the correctness and reliability of the data being exchanged. For example, an application may want to include checksum calculations or error correction mechanisms in order to detect and correct errors in the transmitted data.
In general, the decision to include integrity checks in an application built on top of ZMQ will depend on the specific requirements and goals of the application, as well as the trade-offs between performance, reliability, and complexity. It is up to the developers of the application to determine whether integrity checks are necessary, and to implement appropriate mechanisms to ensure the integrity of the data being exchanged.

pentest - verifying the checklist after checks are done

After pentesting and going through the checklist, how can I reassure my client that these checks were done and the vulnerabilities patched? (Of course, for something like SQLi, demonstrating it is straightforward.)
But I mean, is there somewhere to verify this, or something like that?
Thanks
For checks that have been done, you can provide the reports generated by tools or by manual testing (depending on the vulnerability type) for those specific checks.
For patched vulnerabilities, you will need to re-test the platform and provide the updated reports, whether generated by tools or by manual testing, showing output that indicates the vulnerability is no longer present.
For further reassurance you can also include the steps to reproduce the exploitation in the report, so that if the client wants to test it themselves they can do so (and be assured that it was fixed).
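If it helps, re-test evidence can often be produced by a small script whose output is attached to the report. The sketch below assumes a finding like "missing security headers" on a hypothetical endpoint; the URL and header set would come from the original report:

```python
import datetime
import requests

TARGET = "https://example.com/login"   # hypothetical endpoint from the report
EXPECTED = [
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
]

resp = requests.get(TARGET, timeout=10)
print(f"Re-test of {TARGET} on {datetime.date.today()}")
for header in EXPECTED:
    status = "present" if header in resp.headers else "MISSING"
    print(f"  {header}: {status}")
```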
You need to describe all the methodologies used, such as OSSTMM, OWASP, and NIST. It is also very important to describe the perimeter tested (web forms, APIs, frameworks, network protocols, etc.).
You can also create a topic for every step tested using the OWASP Top 10:
Injection
Broken Authentication
Sensitive data exposure
XML External Entities (XXE)
Broken Access control
Security misconfigurations
Cross Site Scripting (XSS)
Insecure Deserialization
Using Components with known vulnerabilities
Insufficient logging and monitoring
This way you ensure that your test was compliant.

Why is caching usually disabled in test environments?

On our applications we have a lot of functional tests through Selenium.
We understand that it is good practice to have the server where the tests are run be as similar as possible to the production servers, and we try to follow that as much as possible.
But achieving that 100% is very hard, so we have a different settings file for our server with some changes that we want in the staging environment (for example, we opt to turn e-mail sending off because of the additional architecture it requires).
In fact, many server frameworks recommend having an isolated front-controller (environment) for testing to easily achieve these small changes.
By default, most frameworks such as ours recommend that their testing environment should have its cache turned off. WHY?
If we want to emulate production as much as possible, what's the possible advantage of having the server's cache turned off when performing functional tests? There can be bugs that are only found with the cache on, and having it on might also have the benefit of accelerating our test execution!
Don't we just need to make sure that the cache is cleared before starting a new batch of functional tests, the same way we clear the cache when deploying a new version to production?
A colleague of mine suggests that the reason could be that the cache can generate false positives: errors that are not caused by badly implemented features (which are the main target of those tests) but by the cache system itself... but even if those really happen (I suppose it depends on how sophisticated the use of the cache is), why would they be false positives?
To best answer this question I will clarify some points (be aware that this is based on my experience).
Integration tests using the browser are typically "black box tests", which means they are written without knowledge of the code. That is, without knowing whether the cache is being used or not.
These tests are usually designed around certain tasks that are performed during normal use of the system. But those tasks are chosen for automation depending on certain conditions of use (mainly reusability and criticality/importance, but also the cost of implementation). So most of the time we will not need, or want, to test caching behaviour.
By convention, any test should be created with a single purpose and with as few dependencies as possible. Why?
When the test fails, we can quickly find the source of the failure.
Smaller tests are easier to extend, fix, remove...
We do not spend too much time first debugging the test code and then debugging the system code.
Integration testing should follow this convention.
Answering the question:
If we want to check a particular task, we must isolate it as much as possible.
For example, if we want to verify that the user logs in correctly, we have to delete the cookies to be sure that they do not influence the result (because they may). If, on the other hand, we want to test the cookies, we have to use an environment where they are not deleted.
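As a minimal sketch of that isolation step, assuming Selenium WebDriver with Chrome (the URL, element ids, and page title are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://staging.example.com/login")
driver.delete_all_cookies()          # make sure a stale session cannot help
driver.refresh()

driver.find_element(By.ID, "username").send_keys("test-user")
driver.find_element(By.ID, "password").send_keys("test-pass")
driver.find_element(By.ID, "submit").click()
assert "Dashboard" in driver.title   # hypothetical post-login page title
driver.quit()
```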
So, in short:
If there is a need to test caching behaviour, then we need to create an "isolated" environment where this is possible.
The usual purpose of integration tests is to verify functionality, so the framework default is to have the cache disabled.
This does not mean that we shouldn't create our own environment to test caching behaviour.
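As a concrete example of such a switch, assuming a Django project (other frameworks have equivalent settings), the functional-test settings can make caching a no-op while a separate cache-behaviour run keeps a production-like backend:

```python
# settings_test.py -- sketch of a test-only settings file
from .settings import *  # start from the production-like base settings

# Functional tests: make caching a no-op so every test sees fresh data.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.dummy.DummyCache",
    }
}

# For a dedicated cache-behaviour run, point "default" back at the real
# backend used in production and clear it in the test setup instead.
```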

Network Settings for JMeter Traffic | Internet or LAN?

I'm going to perform a load and stress test on a web page using Apache JMeter, but I'm not very sure about the appropriate network setup. Is it better to connect the two machines, the server with the web page and the client running JMeter, via the local network or via the internet? Using the internet would be closer to the real scenario, but with a local network the connection is much more stable and you have more bandwidth for more requests at the same time.
I'm very thankful for opinions!
These are in fact two styles or approaches to load testing, and both are valid.
The first you might call lab testing, where you minimise the number of factors that can affect throughput and response times and really focus the test on the system itself.
The second is the more realistic scenario, where you try to get as much coverage as possible by routing requests through as many as possible of the actual network layers that will exist when the system goes live.
The benefit of method 1 is that you simplify the test, which makes understanding and finding any problems much easier. The drawback is that you lack complete coverage.
The benefit of method 2 is that it is not only more realistic but also gives a higher level of confidence - especially with higher-volume tests, you might find you have a problem with a switch or firewall, and it is only with this type of testing that you identify such issues. The drawback is that it can make finding any issues harder.
So, in short, you really want to do both types. You might find it easier to start with the full end to end test from outside in, and then only move to a more focused test if you find that you need to isolate / investigate a problem. That way you stand a chance of reducing the amount of setup work whilst still getting the maximum benefit from testing.
Note: Outside in means just that, your test rig should be located outside of the LAN (assuming this is how live traffic will flow). These days this is easy to setup using cloud based hardware.
Also note: If the machine that you are running the tests from is the same in both cases then routing the traffic via the internet (out of your LAN and then back in again) is probably not going to tell you anything useful and could actually cause a false negative in your results (not to mention network problems for your company!)
IMHO you should use your LAN.
Practically every user will have a slightly different download/upload speed, so I suggest you first do a normal performance test using your LAN, and when you finish, you can do a few runs from outside just to see the difference.
Remember, you're primarily testing the efficiency of your application on the hardware it sits on. The network speed of your future users is a factor you cannot influence in any way.
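One small addition to that comparison: before comparing the two runs it can help to record the raw network latency from each vantage point, so that differences in the JMeter results can be attributed to the network rather than the application. A throwaway Python sketch (the URL is hypothetical):

```python
import statistics
import time
import requests

URL = "https://staging.example.com/healthcheck"  # hypothetical endpoint

samples = []
for _ in range(20):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    samples.append(time.perf_counter() - start)

# Run this once from the LAN host and once from the outside host,
# then compare the medians when interpreting the JMeter numbers.
print(f"median {statistics.median(samples) * 1000:.1f} ms, "
      f"max {max(samples) * 1000:.1f} ms")
```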

How to implement a secure distributed social network?

I'm interested in how you would approach implementing a BitTorrent-like social network. It might have a central server, but it must be able to run in a peer-to-peer manner, without communication with it:
If a whole region's network is disconnected from the internet, it should be able to pass updates from users inside the region to each other
However, if some computer gets the posts from the central server, it should be able to pass them around.
There should be some reasonable level of identification; some computers might be disseminating incomplete/incorrect posts or performing DoS attacks. It should be possible to mark some information as coming from more trusted computers and some from less trusted ones.
It should theoretically be able to use any computer as a server, while dynamically optimizing the network so that typically only fast computers with ample bandwidth work as seeders.
The network should be able to scale to hundreds of millions of users; however, each particular person is interested in less than a thousand feeds.
It should include some Tor-like privacy features.
Purely theoretical question, though inspired by recent events :) I do hope somebody implements it.
Interesting question. With the use of already existing Tor, P2P, and darknet features, and with some public/private key infrastructure, you could possibly come up with some great things. It would be nice to see something like this in action. However, I see a major problem: not so much people using it for file sharing, but people flooding the network with useless information. I would therefore suggest using a Twitter-like approach, where you can ban and subscribe to certain people, and starting with a very reduced set of functions at the beginning.
Incidentally, we programmers could make a good start toward that goal by NOT saving and analyzing too much information about the users, and by using safe ways of storing and accessing user-related data!
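For the identification and trust requirement, the usual building block is a signature over each post with the author's long-term key. Here is a sketch using Ed25519 via PyNaCl (libsodium bindings); the function names and flow are illustrative, not an existing protocol:

```python
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

# Each author generates a long-term key pair; the public half acts as the
# author's identity that peers subscribe to and assign a trust level to.
author_key = SigningKey.generate()
author_id = author_key.verify_key.encode()   # shared with followers

def publish(post: bytes) -> bytes:
    # The signed blob embeds the signature, so any peer can relay it.
    return author_key.sign(post)

def verify(author_id: bytes, signed_post: bytes) -> bytes:
    try:
        return VerifyKey(author_id).verify(signed_post)
    except BadSignatureError:
        raise ValueError("post was tampered with or forged")

signed = publish(b"hello from inside the disconnected region")
print(verify(author_id, signed))
```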
Interesting, the rendezvous protocol does something similar to this (it grabs "buddies" in the local network)
BitTorrent is a means of transferring static information; it's not intended to have everyone become a producer of new content. Also, BitTorrent requires that the producer act as a dedicated server until all of the clients are able to grab the information.
Diaspora claims to be one such thing.
