How could a (Windows) desktop application be created to monitor the amount of time spent on a particular website?
My first idea was to play with the hosts file to intercept requests, log them, and proxy. This feels a bit clunky, and I suspect my program would look like malware.
I feel like there must be a smarter way. Any ideas?
There is a tool similar to what you are looking for called K-9 Web Protection. It is aimed more at parents who want to monitor what their kids are up to online. I installed it on my niece's computer with good results and praise, as it blocks sites, filters content, and restricts internet access times. It may be OTT for your needs, but it's worth a shot as it lets you see which sites were visited.
The other option is to use a dedicated firewall/monitoring solution such as IPCOP, a Linux-based distribution whose sole purpose is to provide a proxy, a stateful packet inspection (SPI) firewall, and an Intrusion Detection System (IDS).
Hope this helps,
Best regards,
Tom.
You could do this by monitoring active connections via netstat. If you need more detailed data, you can install the Windows Packet Capture Library (WinPcap) and capture anything about network use; then, inside your desktop app, find the traffic that corresponds to 'spending time' on a website (which might just be GET requests in your case, but I don't know) and record whatever statistics you need. A rough sketch of the connection-polling approach is below.
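To make the netstat idea a bit more concrete, here is a minimal sketch of the connection-polling approach. It assumes the third-party psutil package as a stand-in for parsing netstat output yourself; note that reverse DNS often returns CDN names rather than the site you typed, so the per-host times are only approximate.

```python
# Rough sketch: poll established TCP connections (the same data netstat shows)
# and accumulate "seconds seen" per remote host.
import socket
import time
from collections import defaultdict

import psutil  # third-party; stands in for parsing `netstat -n` output

POLL_INTERVAL = 5  # seconds between samples
time_per_host = defaultdict(int)


def resolve(ip, cache={}):
    """Best-effort reverse DNS with a tiny cache."""
    if ip not in cache:
        try:
            cache[ip] = socket.gethostbyaddr(ip)[0]
        except OSError:
            cache[ip] = ip
    return cache[ip]


while True:
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr and conn.raddr.port in (80, 443):
            time_per_host[resolve(conn.raddr.ip)] += POLL_INTERVAL
    top = sorted(time_per_host.items(), key=lambda kv: -kv[1])[:10]
    print(top)  # ten hosts with the most accumulated time so far
    time.sleep(POLL_INTERVAL)
```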
Route the traffic through a scriptable proxy and change the browser settings to point to that proxy.
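As a hedged example of the scriptable-proxy route, here is a tiny addon for mitmproxy (just one scriptable proxy among several; the addon API assumed here is the recent one). It only logs a timestamp and host per request; deriving "time spent" per site is then an offline aggregation step over the log file.

```python
# log_hosts.py - minimal mitmproxy addon sketch (run with: mitmproxy -s log_hosts.py,
# then point the browser's proxy settings at it).
import time


class HostLogger:
    def __init__(self, path="host_log.csv"):
        self.path = path

    def request(self, flow):
        # flow.request.host is the site the browser is talking to
        with open(self.path, "a") as f:
            f.write(f"{time.time():.0f},{flow.request.host}\n")


addons = [HostLogger()]
```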
Related
I want to write a program that can monitor all HTTP/HTTPS requests the system uses to open the default browser, block certain ones, and automatically change certain requested URLs into others. Changing a URL is simple, but the monitoring and blocking part is quite puzzling.
e.g. When clicking on the URL 'https://example.com/asdf.htm', the request will be blocked by the program, the Windows system will receive 'http://www.example2.org/asdf.htm' instead, and the default browser will open the latter URL rather than the former.
I am an amateur developer and student who does not have much experience in solving such problems.
I searched the web and found someone asked a similar question years ago:
https://superuser.com/questions/554668/block-specific-http-request-from-windows
However, I didn't find any useful coding advice on that page. Maybe I could use an antivirus program to block certain URLs, or change the hosts file to block them, but neither handles the URL replacement. Pointing the hosts file at a server of mine that redirects certain requests might work, but that's too complex. I hope someone can help me solve the problem by suggesting a simple way of monitoring the Windows system itself. Thanks!
To summarize our conversation in the comments: in order to redirect or restrict traffic, whether to particular sites or to particular ports (protocols are effectively "mapped" to ports), the main solutions usually are:
a software firewall - keep in mind that software firewalls don't usually redirect, they just allow or block traffic on given ports
a hardware firewall (or an advanced router - not the consumer ones, but enterprise grade) - they do what you want, but they are very expensive and not worth it for a home experiment
a proxy server - this can do what you want
Other alternatives that may or may not work include editing the hosts file, as you said - though as stated earlier I don't recommend it: it's a system file, and if you forget about it, it can become a hindrance (also keep in mind that normally you should not use a Windows account with admin rights, even at home, but that is another story) - and a browser extension, which I would guess only changes content on pages, not the way the browser works (such as changing URLs).
I think a proxy server is the best pick here; there is a small sketch of that approach below. Try it and let me know.
Keep in mind I still recommend you read about networking in order to get a better idea of what you can and can't do in each setup.
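As a rough illustration of the proxy-server approach, here is a sketch of a mitmproxy addon that blocks some hosts and redirects others. mitmproxy is just one possible scriptable proxy, the block list is hypothetical, the rewrite uses the example.com / www.example2.org URLs from the question, and the API shown assumes a recent mitmproxy version; for HTTPS you would also need to install mitmproxy's CA certificate in the browser.

```python
# rewrite.py - sketch of a mitmproxy addon that blocks or redirects requests.
# Run with: mitmdump -s rewrite.py, then point the system/browser proxy at it
# (mitmdump listens on localhost:8080 by default).
from mitmproxy import http

BLOCKED_HOSTS = {"ads.example.net"}             # hypothetical block list
REWRITES = {"example.com": "www.example2.org"}  # old host -> new host


def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if host in BLOCKED_HOSTS:
        # Short-circuit with a 403 instead of contacting the server.
        flow.response = http.Response.make(403, b"Blocked", {"Content-Type": "text/plain"})
    elif host in REWRITES:
        # Answer with a redirect so the browser visibly lands on the new URL.
        new_url = "http://" + REWRITES[host] + flow.request.path
        flow.response = http.Response.make(302, b"", {"Location": new_url})
```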
I am using SuperWebSocket in a Win32 server application on .NET Framework 4.5, and my clients connect via the web. Everything works fine, but only for a while: after some hours (I do not know if this is related, but perhaps the server gets exhausted), my VPS host reports that a DDoS attack has been detected. Here is a graph of the traffic.
SuperWebSocket logs nothing in error.log. I would like some help ruling out the possibilities:
server exhaustion
malware (note: I rebuilt the server one day ago)
SuperWebSocket itself
some other possibility
How can I prevent this from happening? I am thinking of some way to restart the VPS if it detects more than 2,000 requests (I do not know whether that level is normal or not). Please help, I'm desperate.
From the information provided it is not possible to draw a definitive conclusion about the traffic spikes.
In order to determine the source, I would capture and analyze the traffic, which will give you insight into its nature and where it is going.
To do so, you can either capture traffic directly at the switch via a SPAN port or, if you do not have access to that, capture locally on the machine with something like Wireshark.
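If you want a quick first look before reaching for Wireshark, something like the following sketch will summarize the busiest source/destination pairs over a one-minute window. It assumes the third-party scapy package as a stand-in for a capture tool and must run with root/admin rights.

```python
# Sketch: a quick "top talkers" summary of live traffic, as a stand-in for a
# full Wireshark/SPAN capture.
from collections import Counter

from scapy.all import IP, sniff

counts = Counter()


def tally(pkt):
    if IP in pkt:
        counts[(pkt[IP].src, pkt[IP].dst)] += 1


# Capture for 60 seconds, then print the busiest source/destination pairs.
sniff(prn=tally, store=False, timeout=60)
for (src, dst), n in counts.most_common(10):
    print(f"{src} -> {dst}: {n} packets")
```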
Interested in people's opinion.
You have an application server running 3-4 services that do lots of TCP-based communication to/from the server.
There is also a fairly hefty amount of MSSQL work going on too.
Do you run something like Symantec Anti-Virus with proactive/real time/heuristic/foo protection on the server?
Or do you perform full system scan nightly during a maintenance period?
This is all within the context of performance being of utmost importance.
All comments appreciated.
TIA
No. The attacks that servers and the custom apps running on them are vulnerable to are not the desktop malware problems that anti-virus targets. All AV on a server will do is reduce performance and stability.
(Unless of course the server is also being used as a desktop machine, to browse on and so on. But that's a really bad idea already.)
Depending on what the application is doing, AV might have a role to play: for example, if one of the apps includes a user file store, it wouldn't hurt to check the files uploaded into it for viruses. And of course it's normal for an app that deals with mail to pass incoming mail to a checker.
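For instance, an upload handler could hand each file to an on-demand scanner before accepting it. The sketch below assumes ClamAV's clamscan command-line tool is installed and on PATH (it exits 0 for a clean file and 1 when something is detected); the file name is hypothetical.

```python
# Sketch: scan an uploaded file with ClamAV's command-line scanner before
# accepting it into the application's file store.
import subprocess


def is_clean(path: str) -> bool:
    result = subprocess.run(
        ["clamscan", "--no-summary", path],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return True   # no threat found
    if result.returncode == 1:
        return False  # a threat was detected
    raise RuntimeError(f"clamscan failed: {result.stderr.strip()}")


if __name__ == "__main__":
    print(is_clean("upload.tmp"))  # hypothetical uploaded file
```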
I'm interested in how you would approach implementing a BitTorrent-like social network. It might have a central server, but it must be able to run in a peer-to-peer manner, without communicating with it:
If a whole region's network is disconnected from the internet, it should be able to pass updates from users inside the region to each other
However, if some computer gets the posts from the central server, it should be able to pass them around.
There is some reasonable level of identification; some computers might be disseminating incomplete/incorrect posts or performing DoS attacks. It should be possible to treat some information as coming from more trusted computers and some from less trusted ones.
It should theoretically be able to use any computer as a server, while dynamically optimizing the network so that typically only fast computers with ample bandwidth work as seeders.
The network should be able to scale to hundreds of millions of users; however, each particular person is interested in less than a thousand feeds.
It should include some Tor-like privacy features.
Purely theoretical question, though inspired by recent events :) I do hope somebody implements it.
Interesting question. With the use of existing Tor, P2P, and darknet features, and with a public/private key infrastructure, you could possibly come up with some great things. It would be nice to see something like this in action. However, I see a major problem: not people using it for file sharing, but people flooding the network with useless information. I would therefore suggest a Twitter-like approach where you can ban and subscribe to certain people, and start with a very reduced set of functions at the beginning.
Incidentally, we programmers could make a good start towards that goal by NOT saving and analyzing too much information about the users, and by using safe ways of storing and accessing user-related data!
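On the public/private key idea, a minimal sketch of how posts could be signed and verified (so peers can decide how much to trust a given author) might look like this, assuming the third-party cryptography package and Ed25519 keys:

```python
# Sketch: signing posts with an author key pair so peers can verify who a post
# came from and weight trust accordingly.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each author generates a key pair once; the public key serves as the author's identity.
author_key = Ed25519PrivateKey.generate()
author_id = author_key.public_key()

post = b"Hello from inside the disconnected region"
signature = author_key.sign(post)

# Any peer that relays the post also relays the signature; receivers verify it
# against the author's public key before trusting or forwarding the post.
try:
    author_id.verify(signature, post)
    print("post verified - treat according to this author's trust level")
except InvalidSignature:
    print("tampered or forged post - drop it")
```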
Interesting, the rendezvous protocol does something similar to this (it grabs "buddies" in the local network)
BitTorrent is a means of transferring static information; it's not intended to have everyone become a producer of new content. Also, BitTorrent requires the producer to act as a dedicated seeder until all of the clients are able to grab the information.
Diaspora claims to be one such thing.
Can you suggest how to create a test environment to simulate various types of bandwidths and traffic in a web app?
Or maybe an open source program which does this against localhost?
I think this is a very important subject when programming web apps, but it is not a commonly discussed topic. The only way I can imagine creating such an environment is to use some kind of proxy in a local network, but before I start digging into the Squid documentation I would like to hear your suggestions.
If you're using Apache, you may want to take a look at Apache's ab (ApacheBench).
There are two approaches to shaping network traffic so as to simulate a network link:
Run some software on the client or server that sits somewhere in the networking stack and shapes the traffic between the app and the network interface
Run the traffic shaping software on a dedicated machine with 2 network interfaces through which your traffic is routed
(2) is a better solution if you don't want to install software on the client or server (and possibly impact performance), but requires more hardware fiddling.
Some other features you might want to think about are what shaping parameters can be simulated. Most do delay and packet loss, some do jitter and bandwidth limiting as well. Some solutions can selectively filter traffic (for instance by port number, TCP or UDP etc).
Here is a list of some of the systems I've found:
Open Source or Freeware
DummyNet is an open source, BSD Unix-based system for dedicated devices. It is not clear whether the software is still actively maintained.
NistNet is an open source, Linux-based system for dedicated devices. The software has not been actively maintained for several years.
Commercial
Apposite Technologies sell dedicated hardware solutions for simulating WAN links, with a web-based GUI for configuring the settings and collecting traffic measurements.
East Coast DataCom sell dedicated hardware simulators for simulating routers and modems.
Itrinegy offer both dedicated device solutions, and solutions for running on clients or servers.
Network FX offer several dedicated device products for simulating network impairments between the client & server
NetLimiter is a client side system that allows throttling of individual applications, and includes a firewall.
Shunra Software offer a range of products, from high end enterprise WAN simulation and testing, to a simple client-resident emulator.
The closest I can think of is doing something similar with VE Desktop from Shunra.
Simulating High Latency and Low Bandwidth in Testing of Database Applications
Shunra VE Desktop Standard is a Windows-based client software solution that simulates a wide area network link so that you can test applications under a variety of current and potential network conditions – directly from your desktop.
I wrote a PHP script a while back which used cURL to run a sequence of page requests against my server representing a typical use scenario. It output the time the server took to respond to each request. I then had another script which spawned a bunch of these test-case scripts simultaneously for a sustained period and collected the results into a file, which I could then open in a spreadsheet to see average times. That way I could simulate whatever number of users hitting the site I wanted. The limitations are that you need to run the test script on a machine other than the web server, and that the client machine can become too loaded to give meaningful results past a certain point. I've since left that job, otherwise I would paste the scripts here.
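The original scripts are gone, but a minimal sketch of the same idea looks roughly like this (written in Python rather than PHP, with a hypothetical URL list): time a fixed sequence of requests representing a typical session, run several simulated users in parallel, and dump the timings to a CSV for averaging in a spreadsheet.

```python
# Sketch: sequential scenario timing, run by several simulated users at once.
import csv
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SCENARIO = [  # hypothetical "typical user" click path
    "http://test-server.local/",
    "http://test-server.local/login",
    "http://test-server.local/dashboard",
]


def run_scenario(user_id):
    rows = []
    for url in SCENARIO:
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
        rows.append((user_id, url, time.perf_counter() - start))
    return rows


if __name__ == "__main__":
    users = 20  # number of simulated concurrent users
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(run_scenario, range(users)))
    with open("timings.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user", "url", "seconds"])
        for rows in results:
            writer.writerows(rows)
```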
If you are running a Linux box as your server, a Linux box as your client, or have the ability to put a Linux router (perhaps a VM) between your client and server, you can use NetEm.
NetEm is a Linux TC (Traffic Control) discipline which can delay (i.e. add latency to) packets leaving a host. Although it's tricky to set up clever rules (e.g. add latency to some traffic but not to others), it's easy to add a simple "delay everything leaving the interface by 50ms" type of rule, and some recipes are provided.
By sticking a Linux VM between your client and server, you can simulate as much latency as you like. And you can turn it on and off dynamically. Linux has other TC disciplines which can be combined with NetEm to restrict bandwidth (but the script to set this up can be somewhat complicated). NetEm can also randomly drop packets.
I use it and it works a treat :)
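For reference, a "delay everything leaving the interface by 50ms" rule is just a couple of tc commands; the sketch below wraps them in a small script (the eth0 interface name is an assumption, and it must run as root).

```python
# Sketch: apply and remove a simple NetEm rule ("delay everything leaving the
# interface by 50ms, drop 1% of packets") using the standard iproute2 tc tool.
import subprocess


def tc(*args):
    subprocess.run(["tc", *args], check=True)


def add_impairment(iface="eth0"):
    # tc qdisc add dev eth0 root netem delay 50ms loss 1%
    tc("qdisc", "add", "dev", iface, "root", "netem", "delay", "50ms", "loss", "1%")


def clear_impairment(iface="eth0"):
    # tc qdisc del dev eth0 root
    tc("qdisc", "del", "dev", iface, "root")


if __name__ == "__main__":
    add_impairment()
    input("Impairment active - press Enter to remove it...")
    clear_impairment()
```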
Web Application Stress Tool (WAST) from Microsoft is what you need.
http://www.microsoft.com/downloads/details.aspx?familyid=e2c0585a-062a-439e-a67d-75a89aa36495&displaylang=en
I haven't used it for years (lack of need, not because I'd found anything else), but xat webspeed would be the first thing I would point toward
As other people have mentioned, Apache's ab (comes with Apache, so you probably have it already) is good.
Other good options are:
HP's LoadRunner
Apache Jakarta's JMeter
Tsung (if you want to get your erlang on)
I personally like ab and JMeter the best.
We use LoadRunner to do bandwidth and traffic simulation in our app. LoadRunner can start agents on various machines, and you can simulate one machine as running on a dial-up modem vs. another on DSL vs. another on cable internet.
We also use LoadRunner to simulate various traffic conditions, from a 10-user run to a 500-user run. We can also insert think times in the script to simulate a real user executing the HTTP requests. The best part is that it comes with a recording studio that plugs into Internet Explorer, so you can record the whole scenario/use case, which can be anything from hitting a single page to a full-blown script of 50-60 pages or more.
I found this little Java program that works great: Sloppy.
It's not a professional solution, but it works for simple tests; I guess it uses Java streams and buffers to slow down the connection.
Have you looked at Tsung? It's a great utility for seeing if your website will scale in the event of an attack - I mean, massive popularity. We use it for our web frontend, and for our internal systems too.
If you're interested in performing your tests out of your browser, there is also a really great Firefox plug-in.
Do not forget about Wanulator (http://www.wanulator.de/).
The name Wanulator comes from "WAN" and "simulator". This pretty much describes what the software does: it simulates different internet conditions such as delay or packet loss, and it also simulates user access line speeds, e.g. modem, ISDN, or ADSL.
Wanulator is currently packaged as a Linux boot CD based on SLAX, which gives you a full out-of-the-box experience. You can turn any PC into a test system in a blink, just by booting the Wanulator CD. The package already includes useful client software such as a web browser and a network sniffer (Wireshark). If the PC has 2 network interfaces, the system can also run as an intermediate system between your server and your client - as a switch - without any configuration hassle.