Scanning to find active working webservers (nmap?) - shell

I need an nmap command or other utility that finds live websites so I can do a survey on them. The problem is that when I use nmap, it returns IPs of sites that are not working:
nmap -iR 200 -p 80 > scan.txt
I'd like the results to show sites like Google, Amazon, or whatever; they just need to be actual sites with some content on them.
Thanks in advance!

I am not sure I got your question, but if you have a list of those sites stored in a file you can use the following command:
nmap -iL yourfile -v -oX nmap.xml
This command will store the result in an XML file that should help you gather the information you need.
However, if you do not have a list and you just want to find "working" sites... in that case I don't know how you can do that with Nmap. Nmap scans a target. Host (site?) discovery works when you scan a LAN or VPN, but since sites are supposed to be out on the Internet, your question does not make much sense. However, I repeat, I am not quite sure I understand your question.
EDIT: OK, maybe now I get what you mean. If the problem is Nmap giving you false results, you may try to improve the scan with some more aggressive parameters such as -A and -v. Please note that scanning random computers over the Internet (especially if you do an aggressive scan) may not be exactly legal. Honestly, I don't really know about that, but I suggest you gather more information before scanning.
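If the goal is specifically to keep only hosts that really serve a page, one option is to combine Nmap with a quick HTTP check. A rough sketch (the awk field and the curl test are my assumptions, adjust to taste):
nmap -iR 200 -p 80 --open -oG - | awk '/80\/open/{print $2}' > candidates.txt
while read -r ip; do
    # -s silences progress output; --max-time keeps dead hosts from hanging the loop
    if curl -s --max-time 5 "http://$ip/" | grep -qi '<html'; then
        echo "$ip serves actual content"
    fi
done < candidates.txt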


Bash/Syslog referer Address

I have a syslog server, but I need to find out the URL that was used to reach my server. For example, I have syslog1.example.com and syslog2.example.com, both of which hit the same server; in the conf file I will then filter on which URL was used and update a database field based on that value.
I have added fields to the database, and using the conf file I am able to manipulate the request, but I need the referer URL. Does anyone have any idea how I can get this?
Obviously I have managed everything else, but I have made little progress on this part of the task, as I need to know whether it is even possible, and my searches keep bringing up results about cURL, which is not what I need. If anyone knows how I can get the URL, it would be most appreciated.
Update
I have a device which has BusyBox and syslogd installed. I am able to set an address on the device for the syslog, for example 1.1.1.1:514, but I am not able to do any more on the device than that.
I have, for example, 100 devices: 50 are type A and 50 are type B. The issue is that when every device uses 1.1.1.1:514 as the syslog server address, I am unable to tell on the remote syslog server whether the incoming information came from a type A or a type B device.
I have the following solution in mind, although there may be another way to achieve this. If I create two subdomains and point them to the same address, i.e. typea.example.com and typeb.example.com, then in theory on type A devices I will set the remote syslog address to typea.example.com:514, and for type B, typeb.example.com:514. Both of these subdomains will point to 1.1.1.1, so the syslog information is now being received from both type A and type B devices.
I now need to figure out how, in the syslog.conf on the remote server, to find out whether the information was received from a device using typea.example.com or typeb.example.com.
The server does not have Apache installed, etc.; however, in PHP, for example, we can use $_SERVER, and normally I would be able to retrieve this information from $_SERVER['HTTP_HOST']. Is there any way to do this in the syslog.conf on the remote syslog server?
As mentioned, this is one solution I have in mind and it may not be the only one. Syslog is new to me and I am currently wrapping my head around it. From what I understand, rather than variables or parameters, syslog uses macros. None of the macros provided seem to give me the information I need.
I know I can also set the syslog destination by doing, say:
syslogd -R 1.1.1.1:514
Is there any way here I can include further information, for example:
syslogd -R 1.1.1.1:514 type=a
Then I could, say, use $TYPE to get the value, or alternatively add a custom header to the syslog message.
As you can likely tell, I am racking my brains for solutions and hitting brick walls. Any solution or docs that may point me in the right direction would be greatly appreciated.
A final point would be to mention that I am also looking at redirecting the syslog info to a PHP script or a C program (I'll say script but I know I am wrong) in order to vet the information there and then insert it into the DB (a rough sketch of what I mean is below).
Quite simply, I need a way to differentiate type A from type B. Hopefully this clears matters up and these are not just the ramblings of a madman.
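For what it's worth, the crude shape of that last idea, assuming the remote syslogd also writes incoming messages to a local file such as /var/log/messages, and with vet.php standing in for a hypothetical vetting/DB-insert script:
tail -F /var/log/messages | while read -r line; do
    # hand each incoming log line to the script that decides the type and stores it
    echo "$line" | php /usr/local/bin/vet.php
done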

How does port listening work?

I was wondering how port listening works. I can only imagine a loop which always looks for "something" new, but that seems very inefficient to me.
If it helps: my concrete problem is that I have two computers and a server. The first computer creates data and stores it on the server. My program, which I want to write on the second computer, should read each new file on the server as soon as it has been created. The data-creating software is written in LabVIEW and my program is a C++/Qt application. My idea is to listen on a port for the file, or just for the notification to look in the server folder. (That is where the file should be stored anyway.)
As an additional question: should I dig deeper into port listening, or is it comparably efficient to check the server folder for new files every n milliseconds?
You have a couple of options.
1) You could use inotify (see here), probably on the server (but maybe on your second PC if the filesystem is shared via Samba), to be informed when the file changes and then start processing it. This works on Linux, but you didn't mention your OS/platform. There is a rough inotifywait sketch at the end of this answer.
2) You could use a socket or port to notify your "second" computer that there is new data on the server to process. If you choose this approach, you could test it out at the command line using netcat, or nc as it is sometimes called. It is available for Windows and is installed on pretty much all Linux and OSX distributions. You could simply leave your second computer waiting on a read from a socket that passes it the filename. So your second computer would do:
while :
do
    # block here until the first computer sends a filename on port 2000
    file=$(netcat -l 2000)
    echo "New file available $file"
done
And your server, or first computer would do this when there is a new file:
echo filename | netcat <ip address of second computer> 2000
I chose port 2000 because ports below 1024 need special privileges, but you can choose any port you like.
Waiting on a port/socket is blocking and doesn't use much CPU by the way.
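For completeness, the inotify option at the command line might look something like this (a sketch, assuming the inotify-tools package is installed and that the files land in /srv/data, which is a placeholder path):
inotifywait -m -e close_write --format '%w%f' /srv/data |
while read -r file; do
    # every completed write in the watched directory prints the file's full path
    echo "New file available $file"
done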

Wireless sniffing to catch URL visited by user - Kismet

I am designing software to record only the URLs visited by students while they are taking an exam, so I somehow need to know which IP is visiting which site.
I will have a list of all the students' IPs. I just need a way to find out which sites they are visiting. For this I tried Kismet and was able to generate the .pcapdump file, which has the details of all the packets. The network is open and unsecured, so I was able to get the list of all the IPs, but I couldn't see the URLs they visited.
Steps:
OS: Backtrack Linux 5
Start Kismet on wlan0
Run the following command to convert .pcapdump to .txt through tshark
$ tshark -r /path/Kismet.pcapdump >> log.txt
Read log.txt for IPs -- this shows all the student IPs, but I need to get the URLs visited by those IPs too.
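(I was also wondering whether step 3 could be narrowed so tshark only prints the requesting IP and the URL rather than every packet. Something like the following, assuming plain HTTP traffic and a reasonably recent tshark; older versions use -R instead of -Y:)
$ tshark -r /path/Kismet.pcapdump -Y http.request -T fields -e ip.src -e http.host -e http.request.uri >> urls.txt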
Is Kismet the right way to go? I have to automate this whole thing, so I cannot use Wireshark and manually convert the files; that is why I chose Kismet.
I need to be able to generate an alert or some other activity as soon as a URL (like www.google.com) is visited by any of the IPs in the database.
I believe you would want to look at a squid/squidGuard type of setup if you want to let your students access only certain "white-listed" sites during the exam. It can be done based on the IP addresses of the students' PCs (if they are static), or you can create usernames/passwords and apply the rules to them.
I think you want to use something more like dsniff's urlsnarf. There are some good tutorials on the Internet on how to use it (check the BackTrack forums).

Multiple connections in a single SSH SOCKS 5 Proxy

My first question here on Stack Overflow: what do I need to do so that the SSH SOCKS 5 proxy (SSH2) will allow multiple connections?
What I have noticed is that when I load a page in Firefox (already configured to use the SOCKS 5 proxy), it loads everything one by one. This is noticeable with the naked eye, and I also confirmed it through Firebug's Net tab, which logs the connections that have been made.
I have already configured some of the directives in the about:config page, like pipelining, persistent proxy connections, and a few other things, but I still get this kind of sequential load of resources, which is noticeably very slow.
network.http.pipelining;true
network.http.pipelining.maxrequests;8
network.http.pipelining.ssl;true
network.http.proxy.pipelining;true
network.http.max-persistent-connections-per-proxy;100
network.proxy.socks_remote_dns;true
My ISP sucks: during the day it intentionally breaks connections on a random basis, so it is impossible to actually accomplish meaningful work without a lot of browser refreshes or hitting the F5 key. That is why I started looking for a solution.
SSH's dynamic port forwarding is the best solution I have found to date, because it has some pretty good compression which saves a lot of useless traffic, and it is also secure. The only thing remaining is to get it to run multiple connections.
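For reference, I start the proxy with OpenSSH's dynamic forwarding along these lines (the host and port are placeholders; -C enables the compression mentioned above and -N skips running a remote command), and Firefox then points at localhost:1080 as a SOCKS 5 proxy:
ssh -C -N -D 1080 user@remote.example.com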
Thanks for all the inputs.
I have had the same thought, and my conclusion is that it should already have multiple connections going through the SOCKS proxy. This is because if you view the SSH connection with the -vvv flag, you'll notice it opening up different ports for the different requests.
I think it may have something to do with SSH-over-TCP itself; plus, perhaps, some extra inefficiencies and/or bugs in the implementations. Are you using only OpenSSH on Mac OS X / *BSD / Linux, or is this PuTTY on Windows?
Your situation is actually pretty much exactly why SCTP was developed (as a TCP replacement), which has a notion of multiple streams from within a single connection.
Hopefully, we'll have SSH over SCTP readily available one day. The best part about SCTP is that it would still work over IPv4; i.e., it is supposedly mostly a matter of the endhosts having support for it, so, unlike with IPv6, you wouldn't have to wait for your lazy ISP (at least, theoretically).

How can I remotely watch logs on Win2003 servers?

(Briefly, like this question but for Windows servers.)
I have several Win2003 servers running custom application services (C/C++, not Java) that write text-based logs in a custom format:
[2009-07-17 12:34:56.7890]\t INFO\t<ThreadID>\tLog message...
[2009-07-17 12:34:56.7890]\t *WARN\t<ThreadID>\tLog message...
[2009-07-17 12:34:56.7890]\t**ERR \t<ThreadID>\tLog message...
I would like to have a way to easily and efficiently (over a not-very-fast VPN) "watch" these logs for lines that match a pattern (like tail -f | grep -E on Linux). Ideally the output would be aggregated, not one window/shell per file or per server, and a Windows application would be best, so that I can put it in front of people who are command-line-phobic.
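(For concreteness, the Linux-style one-liner I have in mind is roughly the following, aggregating a couple of files and matching the WARN/ERR markers; the paths are just placeholders:)
tail -f /logs/server1/app.log /logs/server2/app.log | grep -E '\*WARN|\*\*ERR'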
Any recommendations?
edit: fixed link
Try using BareTail.
Splunk from www.splunk.com is the way to go. It is free and does exactly what you are asking for.
