Wireless sniffing to catch URLs visited by users - Kismet

I am designing software to record the URLs visited by students while they are taking an exam, so I somehow need to know which IP is visiting which site.
I will have a list of all the students' IPs; I just need a way to find out what sites they are visiting. For this I tried Kismet and was able to generate the .pcapdump file, which has the details of all the packets. The network is open and unsecured, so I was able to get the list of all the IPs, but I couldn't see the URLs they visited.
Steps:
OS: BackTrack Linux 5
Start Kismet on wlan0
Run the following command to convert the .pcapdump to .txt with tshark:
$ tshark -r /path/Kismet.pcapdump >> log.txt
Read log.txt for IPs -- this shows all the student IPs, but I need the URLs visited by those IPs too.
Is Kismet the right way to go? I have to automate this whole thing, so I cannot use Wireshark and convert the files manually; that is why I chose Kismet.
I need to be able to generate an alert, or trigger some other action, as soon as a URL (like www.google.com) is visited by any of the IPs in the database.
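For reference, a hedged sketch of how tshark could pull the visited hosts straight out of the capture, assuming the traffic is plain HTTP (the capture path is the one from the steps above; ip.src, http.host, and http.request.uri are standard tshark field names, and older tshark builds take -R where newer ones take -Y):

# Extract source IP, Host header, and request path from each HTTP request
$ tshark -r /path/Kismet.pcapdump -Y http.request -T fields -e ip.src -e http.host -e http.request.uri >> log.txt

The same idea could run live to raise the alert as soon as a watched host appears (www.google.com is just the example from the question; HTTPS traffic would not expose its URLs this way):

# Hypothetical alert loop over live HTTP requests on wlan0
tshark -i wlan0 -Y http.request -T fields -e ip.src -e http.host 2>/dev/null |
while read ip host; do
    [ "$host" = "www.google.com" ] && echo "ALERT: $ip visited $host"
done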

I believe you would want to look at a Squid/SquidGuard type of setup if you want to let your students access only certain "whitelisted" sites during the exam. It can be done by the IP addresses of the students' PCs (if they are static), or you can create usernames/passwords and apply the rules to them.
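For illustration, a minimal squid.conf sketch of that whitelist idea (the ACL names, the site-list file, and the student subnet are all hypothetical):

# squid.conf fragment: students may reach whitelisted domains only
acl students src 192.168.1.0/24
acl examsites dstdomain "/etc/squid/exam-sites.txt"
http_access allow students examsites
http_access deny all

exam-sites.txt would hold one allowed domain per line; acl, dstdomain, and http_access are standard Squid directives, but the values above are placeholders.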

I think you want to use something more like dsniff's urlsnarf. There are some good tutorials on the internet on how to use it (check the BackTrack forums).
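A hedged sketch of the urlsnarf route (the interface and the IP-list file are illustrative; urlsnarf prints each request in Common Log Format, whose first field is the client address):

# Flag any request whose client IP appears in a file of student IPs
urlsnarf -i wlan0 | while read line; do
    ip=${line%% *}
    grep -qx "$ip" student_ips.txt && echo "ALERT: $line"
done

student_ips.txt (one IP per line) is an assumed input; the loop itself is plain shell, so it should be easy to fold into an automated pipeline.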

Related

Acquiring the ping of a certain user [Discord.Net]

I cannot find the ping for a certain user in UserEventArgs no matter how I look for it. Is there a way to get it from somewhere?
For clarification: I made a bot that has to react in a certain way to the ping of a certain user that I have identified (for example, he used a command and I know his name).

How do I create a Tasker program that will open a browser URL only at certain times of the day?

I am new to Tasker and the like, and I have a passion for learning programming skills. I want to create a task where the Chrome browser can run at any time of day, but social networking websites are blocked except during certain hours. How do I go about creating such an app?
I am running Tasker on a Moto G (2014).
There are two ways of doing it:
Rooted phone
You'd need to edit the hosts file located at /system/etc.
In order to block access to a chosen URL (let's use google.com as an example), add the following:
127.0.0.1 google.com
What will it do? Redirect all google.com requests to localhost.
After changing the hosts file, restart your phone for the changes to take effect.
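If you want Tasker to toggle the block on a schedule, a sketch of what its Run Shell action could execute (root required; /system is normally mounted read-only, and the exact remount command varies by device):

mount -o remount,rw /system
echo "127.0.0.1 google.com" >> /system/etc/hosts
mount -o remount,ro /system

A second task would remove the line again outside the blocked hours; sed -i '/google.com/d' /system/etc/hosts is one hypothetical way to do that, assuming the device's busybox provides sed -i.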
Non-rooted phone
Although I have personally never used it, I suppose you could install an app called Servers Ultimate, which would run as a service on your phone, allowing you to turn predefined servers on/off (on the fly) using its built-in Tasker plugin.
You'd set up your own DNS server there and define a list of (dis)allowed IPs, so whenever you start that DNS server the rules would kick in, restricting access to the chosen URLs.
I know that this is an old thread, but couldn't you create a time-based profile to run every hour, or at whatever point you choose? All you would then need to do is create a task that uses the Net category, then the Browse URL action. Enter your URL and the page will open.

Bash/syslog referer address

I have a syslog server, but I need to find out the URL that was used to reach my server. For example, I have syslog1.example.com and syslog2.example.com, both of which hit the same server; in the conf file I want to work out which URL was used and update a database field based on this value.
I have added fields to the database, and using the conf file I am able to manipulate the request, but I need the referer URL. Does anyone have any idea how I can get this?
Obviously I have managed everything else, but I have tried little towards this part of the task, as I need to know whether it is even possible, and my searches keep bringing up results about cURL, which is not what I need. If anyone knows how I can get the URL, it would be most appreciated.
Update
I have a device which has busybox and syslogd installed. I am able to set an address on the device for the syslog, for example 1.1.1.1:514. But I am not able to do anymore on the device other than this.
I have, for example, 100 devices: 50 are type A and 50 are type B. The issue is that when every device uses 1.1.1.1:514 as the syslog server address, I am unable to tell on the remote syslog server whether a given message came from a type A or a type B device.
I have the following solution in mind, although there may be another way to achieve this. If I create two subdomains and point them at the same address, i.e. typea.example.com and typeb.example.com, then in theory I will set the remote syslog address to typea.example.com:514 on type A devices and typeb.example.com:514 on type B devices. Both of these subdomains will point to 1.1.1.1, so the syslog information from both device types is still received by the same server.
I now need to figure out how, in the syslog.conf on the remote server, to find out whether the information was sent by a device using typea.example.com or typeb.example.com.
The server does not have Apache installed, etc. However, in PHP, for example, we can use $_SERVER, and normally I would be able to retrieve this information from $_SERVER['HTTP_HOST']. Is there any way to do this in the syslog.conf on the remote syslog server?
As mentioned, this is one solution I have in mind, and it may not be the only one. Syslog is new to me and I am currently wrapping my head around it. From what I understand, rather than variables or parameters, syslog uses macros, but none of the macros provided seems to give me the information I need.
I know I can also set the syslog destination by doing, say:
syslogd -R 1.1.1.1:514
Is there any way here that I can include further information, for example:
syslogd -R 1.1.1.1:514 type=a
Then I could, say, use $TYPE to get the value, or alternatively add a custom header to the syslog message.
As you can likely tell, I am racking my brains for solutions and hitting brick walls. Any solution, or docs that may point me in the right direction, would be greatly appreciated.
A final point: I am also looking at redirecting the syslog info to a PHP script or a C program (I'll say "script" but I know that's the wrong word) in order to vet the information there and then insert it into the DB.
Quite simply, I need a way to differentiate type A from type B. Hopefully this clears matters up and these are not just the ramblings of a madman.
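One caveat worth noting: plain syslog has no equivalent of an HTTP Host header. The device resolves typea.example.com to 1.1.1.1 before sending, so the packet that arrives carries only the source IP and the message; the name the device used is invisible to the server, and no macro can recover it. A sketch of an alternative that does differentiate, assuming the server runs syslog-ng (the ports and file paths are illustrative): give each device type its own listening port.

# syslog-ng.conf fragment: differentiate device types by port
source s_typea { udp(port(514)); };
source s_typeb { udp(port(515)); };
destination d_typea { file("/var/log/devices/typea.log"); };
destination d_typeb { file("/var/log/devices/typeb.log"); };
log { source(s_typea); destination(d_typea); };
log { source(s_typeb); destination(d_typeb); };

On the devices, type A would then use syslogd -R 1.1.1.1:514 and type B syslogd -R 1.1.1.1:515; your planned PHP or C receiver could apply the same port-based split before inserting into the DB.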

Scanning to find active working webservers (nmap?)

I need an nmap command or other utility that finds live sites so I can do a survey on them. The problem is that when I use nmap, it returns IPs of hosts that are not running working sites:
nmap -iR 200 -p 80 > scan.txt
I'd like the results to show sites like Google, Amazon, or whatever; they just need to be actual sites with some content on them.
Thanks in advance!
I am not sure I got your question, but if you have a list of those sites stored in a file, you can use the following command:
nmap -iL yourfile -v -oX nmap.xml
This command will store the results in an XML file, which should help you gather the information you need.
However, if you do not have a list and you just want to find "working" sites... in that case I don't know how you can do that with Nmap. Nmap scans a target. Host (site?) discovery works when you scan a LAN or VPN network, but since sites are supposed to be on the Internet, your question does not make much sense. I repeat, though: I am not quite sure I understand your question.
EDIT: OK, maybe now I get what you mean. If the problem is Nmap giving you false results, you may try to improve the scan with some more aggressive parameters such as -A and -v. Please note that scanning random computers over the Internet (especially if you do an aggressive scan) may not be exactly legal. Honestly, I don't really know about that, but I suggest you gather more information before scanning.
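Building on that, a hedged sketch that filters the random scan down to hosts that actually answer HTTP (--open and -oG are real nmap options; the 200-only check is just one heuristic, since live sites often answer with redirects instead):

# Keep only hosts with port 80 open, then probe each with curl
nmap -iR 200 -p 80 --open -oG - | awk '/80\/open/{print $2}' |
while read ip; do
    code=$(curl -s -o /dev/null -m 5 -w '%{http_code}' "http://$ip/")
    [ "$code" = "200" ] && echo "$ip serves content (HTTP $code)"
done

The legality caveat above still applies to any random-target scan.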

How do I get a collection (or a stream) of distinct IP addresses using Tor?

I'm writing a web crawler for academic research. This crawler makes millions of requests that I want to distribute over ten or so IP addresses.
My machine has one IP address already. I can get a second by using Tor. Can I get even more IP addresses out of Tor? Here are the ideas (and the questions surrounding them) that I have for doing this:
Run multiple instances of Tor; each provides an IP address. (But will Tor map more than one or two anonymized IP addresses to my machine?)
Run one instance but for each request change its identity. (But will Tor rate-limit this behavior, as mentioned here?)
Would either of these ideas work, or do the bits in parentheses make them fail? Any other ideas?
Tor relays have rate limits, and NEWNYM is limited to 5-second intervals.
If they're not fast enough, a willing botnet or app engine should work.
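For what it's worth, both ideas from the question can be sketched from the shell (the port numbers, directories, and the unauthenticated control port are assumptions; SocksPort, ControlPort, and DataDirectory are documented Tor options):

# Idea 1: several independent Tor instances, each with its own SOCKS port
# (offset from the default 9050/9051 to avoid clashing with a running Tor)
for i in 1 2 3 4; do
    tor --SocksPort $((9060 + i * 2)) \
        --ControlPort $((9061 + i * 2)) \
        --DataDirectory /tmp/tor$i &
done

# Idea 2: ask a running instance for a fresh circuit over its control port
# (subject to the NEWNYM rate limit mentioned above)
printf 'AUTHENTICATE ""\r\nSIGNAL NEWNYM\r\nQUIT\r\n' | nc 127.0.0.1 9051

Note that separate instances give separate circuits, not guaranteed-distinct exit IPs; with ten instances, some exits may still coincide at any given moment.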
