We recently had a third-party auditor perform a penetration test on our Windows Server 2008 web server, and it uncovered a remote OS detection vulnerability: the test detected the OS as well as the version of IIS.
The auditors recommended: "If possible, configure the web server so that it does not present identifiable information in the banners."
I've done quite a bit of research and could not find any easy way to quickly prevent this information from being disclosed.
Does anyone know of a way to do this? Is this something that needs to be configured/denied at the server level, or at the web application level within the code?
URLScan is what you'd have used back in the IIS 6 days; I'm not sure whether it still works with IIS 7 or 7.5. This is a bit of security by obscurity, and quite honestly most attacks spray everything they have at you and don't care whether you are presenting yourself as IIS; they'll throw Apache attacks at you or vice versa.
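If URLScan turns out not to work on your version, you can strip the most obvious headers yourself with a small managed module in IIS 7's integrated pipeline. A minimal sketch, assuming the integrated pipeline (the module/class name is mine, and this only hides response headers, not the deeper fingerprints discussed below):

```csharp
using System.Web;

// Strips identifying response headers on IIS 7+ (integrated pipeline only).
// Register it in web.config under <system.webServer><modules>.
public class RemoveServerHeadersModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // PreSendRequestHeaders is the last chance to touch the headers
        // before they go out on the wire.
        app.PreSendRequestHeaders += (sender, e) =>
        {
            var headers = ((HttpApplication)sender).Response.Headers;
            headers.Remove("Server");           // e.g. "Microsoft-IIS/7.5"
            headers.Remove("X-Powered-By");     // e.g. "ASP.NET"
            headers.Remove("X-AspNet-Version"); // ASP.NET version number
        };
    }

    public void Dispose() { }
}
```

The X-AspNet-Version header can also be switched off declaratively with <httpRuntime enableVersionHeader="false" /> in web.config.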
On top of that, there are plenty of things besides banners that give the server away. The order in which servers present their response headers differs between IIS, Apache, WebLogic, etc. httprint is one utility that fingerprints servers this way: http://net-square.com/httprint/
Beyond that, you have programs like Satori and p0f that do passive OS identification based on the TCP/IP stack and/or other means.
So yeah, go back to the auditors and ask them what specifically they are recommending, and why! Removing the extremely low-hanging fruit of the banner is one thing, but honestly, unless your attacker is a script kiddie with a script that ONLY looks at banner information, you aren't protecting yourself from much of anything.
Our distributed application uses Microsoft RPC for interprocess communication. Starting with Windows XP SP2 and Windows 2003 SP1, Microsoft tightened the bolts, so programs on two different computers can no longer communicate that easily.
Either both must be running under suitable user accounts so that authentication succeeds, or the RPC server must "open the hole" by calling RpcServerRegisterIf2() with the RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH flag to allow unauthenticated calls, as was possible prior to the "tightening" change.
How safe is the second option? Will it really compromise a computer that is behind a corporate firewall?
I'm asking this here because it's a program design problem, not a setup problem.
Not safe at all. Yes, it will really compromise the computer.
People who assume that attacks or malicious behaviour can only come from outside "the corporate firewall" will eventually be very disappointed :-)
I would never delegate the responsibility for securing my systems or applications to a party outside of my control. That's just asking for trouble.
I see this in the same area as people asking why they need constraints in their databases if their applications will always follow the rules, not realising that all it takes is some rogue with a JDBC driver and JRE, or even a buggy release of your application, to bring down the whole house of cards.
I would have thought that, in a corporate environment, all users would be centrally maintained anyway (e.g., AD) so that the issue with authentication would be minimal.
I am about to travel to Europe (I'm Australian but imagine this is a similar circumstance for US users and simply flipped for European users).
However, there is a slim possibility that I will need to do some Visual Studio work while I'm travelling.
As I see it I have three options:
Leave a desktop PC on at home and access it remotely via net cafes.
Carry a laptop with me on the trip, uploading files as required over public wifi.
Option 2, but instead buy a cheap, light netbook that is miraculously capable of running VS.
Does anyone have experience with, or advice on, any of these options?
For reference, this existing post suggests that using VS remotely over short distances is okay, but that longer distances could be more problematic. I've used VS via RDP to a US server before and it was pretty laggy, but for small changes I could get by.
Concerns I have that you may have some experience with:
Weight of luggage (I'd ideally like to travel light)
Security of the laptop (I imagine it'll be too heavy to carry around all the time, so I'd have to leave it at the hotel/hostel etc. and hope for the best)
Security of data (I don't want someone stealing RDP access to my home PC)
Security of FTP (I don't want someone stealing FTP passwords over wireless)
I'd go with option #2 (carry a laptop that can run VS).
This way you can use the "more convenient" method when it works well (use the laptop as an RDP client if the connection is low-latency enough), but you can still work locally if the connection you find is not reliable.
I think the bottom line is, always have a backup method when depending on networks that are far away and beyond your control.
Edit: Regarding the additional security concerns, most of those are things you should deal with anyway, travelling or not. If the stuff you're working with is that sensitive, you should probably improve the security of your remote work environment with a VPN and a more secure file transfer method. And before you take your laptop anywhere, know what your plan is should you lose it.
It's a vacation. How do you expect to rest up properly if you're always worrying about work? Leave the phone at home too.
I used to leave a home PC on with VS installed and use services like GoToMyPC or LogMeIn.
Since I started using a laptop, I just carry it with me on business trips, with VPN connectivity and a 3G data card.
But seriously, if on vacation, I do not want to take my laptop with me.
security
First and foremost, encrypt the contents of the HDD - be safe.
If I am on a business trip, the laptop is with me so I am not as concerned with where it is. If I am on vacation, I do not know that I want to take one with me.
If it is important, then I would keep my laptop/PC at work ON, with someone on hand who has access to turn it on/reboot it. I would then carry a light laptop that lets me connect and work if I need to. If that goes down, I can always head into a cybercafe.
database
If you are anticipating working, bring your dev database with you. I know it hogs space and memory (while in use), but pulling data over the wire has taken long enough to make me lose concentration.
standalone
Make the laptop standalone so that it can work without a connection to a VPN or the internet - coverage is not the best, or uniform, in all areas.
Use TrueCrypt to encrypt your hard disk. Use a VPN, SSH or something similar for remote connections. I always bring my laptop, but if I were to lose it, it would be just a brick to the finder, and I have a good backup system that lets me get up and running on another computer quickly.
I tried installing VS2010 on my netbook and it was a no-go. I was, however, able to install Expression Blend/Web, which is good for most tasks.
Edit: To make this more useful... my netbook is an HP Mini 1100 series with 1GB of RAM running Windows 7 "Starter".
Beware: I don't know where you are going in Europe, but do not count on a reliable internet connection in a hotel. It generally works, but when it does not, don't count on the staff to repair it. Of course, if you also carry your own connection (3G or EDGE on your mobile phone), then this will not be a problem.
I suggest option 2 for working on your source code.
I also recommend using Git, so you can keep working with source control while disconnected from the office source control. When you get access again, you can sync your whole repository with your office repository.
Of course, it all depends on which source control provider you are using.
For the occasional things that are not in Git, use a VPN for enhanced security.
My experience:
1) Purchased a small netbook (a Samsung with 2GB or so of RAM; I can look up the exact model number if anyone is interested, but I think it's comparable to, or just above, the NC10).
2) Internet is bad in Europe (at least among the options available to travellers). Something to note.
3) The netbook's performance was absolutely fine. You don't want to be doing too much dev work because of the small screen (though it was only really an issue for me because I got sick of the trackpad and didn't have a separate mouse), but it's honestly pretty fast and easy to use for .NET MVC development in Visual Studio.
I will apologize in advance as this post is born out of severe frustration.
I have a classic ASP website that has been running on Windows 2000/IIS 5 for years, and another ASP.NET 2.0 site that we've recently started running on the same servers. So far, everything is running well.
Last year, I tried upgrading (a fresh install) to Windows 2003/IIS 6. The classic ASP site was much slower - about 50% slower, based on logs/stats averaged over weeks of use. I tried everything to find out what was slow: network tweaks, integrated mode, classic IIS 5 mode, in-process, out-of-process. Nothing ever made things better, and I soon rolled back to IIS 5/2000. The very day I rolled back, performance went right back to where it had been. This happened on more than one server. Eventually, I gave up and chalked it up to 2003 TCP issues of some sort.
I recently installed a Windows 2008/IIS server on a similar but more powerful machine, in the hope that things would be better. Much to my happiness, my classic ASP app is faster under Windows 2008. Unfortunately, my ASP.NET app is 50-75% slower for no apparent reason. All of its content loads. It's on the same network as the 2000 machine. The site was copied directly from the other machine, and it's a precompiled web app from Visual Studio 2005.
While the page does hit the database and another server for its initial data, it caches the results for quite a while after that. It also uses the same DB servers as the classic site, which is fast, so I know it's not necessarily a connection issue.
I've tried the default app pool and the classic .NET pool; it made no difference. I've upped/checked the max threads and max per CPU in all the usual locations; web garden or not, nothing seems to matter. I've double-checked that compilation debug="false" is still set in the web.config.
For a quick benchmark, I used ab.exe (Apache Bench) to send 10 requests, 1 at a time. Even if I use IE or Firefox to hit the site, it's clearly slower than under 2000, even according to Firebug.
At this point, I'm frustrated and at a complete loss as to where to start. Has anyone been through this sort of mess before?
Speed depends on many factors. You need to measure performance on the server itself to understand whether this is a server issue. Enable tracing for your web site in web.config and see which part/function is slowing it down. You can add your own trace statements after each operation to see which block of code is slowest. I'm sure you will find things you can improve/optimize once you know which part of the page is the slowest.
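For example (a rough sketch; the method names below are placeholders for your page's real DB and remote calls), with <trace enabled="true" localOnly="true" /> under <system.web> in web.config, each Trace.Write shows up on trace.axd with elapsed-time columns, so the gaps between entries point at the slow step:

```csharp
using System;
using System.Web.UI;

// Code-behind sketch: bracket each suspect operation with Trace.Write
// and compare the "From Last" timings on http://yoursite/trace.axd.
public partial class SlowPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Trace.Write("Perf", "before initial data load");
        LoadInitialData();              // placeholder for the real DB hit
        Trace.Write("Perf", "after DB, before external server call");
        FetchFromOtherServer();         // placeholder for the remote call
        Trace.Write("Perf", "after external server call");
    }

    private void LoadInitialData() { /* real data access here */ }
    private void FetchFromOtherServer() { /* real remote call here */ }
}
```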
In my case, the answer turned out to be a simple one once I fired up Wireshark: there was one external resource request that could not be resolved, since the test machine had no direct access to the internet like the live machine did.
It's always the little things.
What good tools can I use to monitor IIS? What is included seems less useful than I'd like. I realize I can add performance counters, but those don't tell me very much; it's just a collection of properties plugged into a generic graphing tool.
I have problems with old legacy applications hanging, and various and sundry other things. Also, when I need basic information, like how many connections I have in IIS and their details, I don't know what to do. I've googled extensively and cannot find much. I find some log parsers, but I want real time. I found some commercial tools that don't really seem to be quite what I want, and besides, I'd like to find something free. This is very basic stuff that is pretty easy to get in Apache. I found IISTracer, but I am a bit skeptical of it; I did install and try it out. Is there anything else? Some of these legacy applications are classic ASP, so a CLR profiler alone isn't what I'm looking for, although those are handy.
EDIT: Is IISTracer really the only tool like this out there for IIS?
Tools for Troubleshooting IIS 6.0
IIS Request Monitor (IIS 6.0)
One of the techniques for tracking down and mitigating problems with badly behaving sites is to use Application Pools. This article shows how to set one up.
It has also been mentioned here on SO:
What causes an application pool in IIS to recycle?
Pros and cons of having dedicated application pools over keeping web applications in one default app pool
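And if part of what you need is just a live connection count, the IIS "Web Service" performance counters can be read programmatically instead of through the generic graphing tool. A minimal sketch, assuming the Web Service counter category is present (it is installed with IIS); the polling interval and counter choices are just examples:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Polls the IIS "Web Service" counters for a live view of connections.
// Use "_Total" as the instance, or a specific site name.
class IisConnectionWatch
{
    static void Main()
    {
        var current = new PerformanceCounter(
            "Web Service", "Current Connections", "_Total", true);
        var attempts = new PerformanceCounter(
            "Web Service", "Connection Attempts/sec", "_Total", true);

        while (true)
        {
            // Rate counters need two samples, so the first "/sec"
            // reading will always be zero.
            Console.WriteLine("current={0}  attempts/sec={1:F1}",
                current.NextValue(), attempts.NextValue());
            Thread.Sleep(2000);
        }
    }
}
```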
As developers, we believe that not having local administrative access is going to severely handicap our productivity. We will be restricted from running IIS (we’re a web development shop), installing applications, running Microsoft power tools, etc. If you’re going through the FDCC process now, it would be great to hear how you are coping with these changes.
Having actively worked as a contract developer at a base that uses the AF Standard Desktop, I can tell you a few things.
1: And most important: don't fight it, and don't do what the first person suggested ("let them choke on it"). That is absolutely the wrong attitude. The military/government is fighting a lack of funding, overstretched resources and a blossoming technology footprint that they don't understand. The steps they are taking may not be perfect, but they are under attack and we need to be helping, not hindering.
OK, that's off my chest.
2: You need to look at creating (and I know this is hard with funding the way it is) a local development lab. Every base I have worked at has an isolated network segment you can get on that has external access and is isolated from the main government network. You basically have your work PC, on the protected network, for e-mail, reports, etc., but you develop in your small lab. I've had a lab that was two PCs tucked under my desk that were going to be returned during a tech refresh. In other words, be creative about making yourself a development machine plus servers that are NOT restricted. Those machines are just not allowed to be connected to the main LAN segment.
3: Get the distributions of the desktop configurations. Part of your testing needs to be deploying to and running on these configurations. Again, these configurations are not meant for development boxes; they are meant for the machines people use for day-to-day government work.
4: If you are working on web solutions, be very aware of the restrictions on adding trusted sites, ActiveX components, certs, and certain types of script execution that the configuration won't allow - especially if you are trying to embed widgets/portlets/utilities that require communication outside the deployed application's domain.
5: Above all, remember that very few of the people you work for understand the technology they are asking you to implement. They know they want function X, but they want you to follow draconian security rule Y while achieving it. What that usually means is that "grab some open-source lib or plugin and go" is not an option - yet that is exactly what your managers think you are going to do, because of the buzz around rapid development.
In summary, it's a mess out there. Try to help solve the problem.
While I've never been through the FDCC process, I once worked for a U.S. defense contractor whose policy was that no one had local administrative access to their machines. In addition, flash drives and CD-ROMs were disabled (if you wanted to listen to music on CDs, you had to have a personal CD player with headphones).
If you needed software installed, you had to put in a work order. Someone would show up at your desk with the install media, log in to a local admin account, and let you install the software (the reasoning being that you knew what to install better than they did). Surprisingly, the turnaround was pretty quick, usually around half an hour.
While an inconvenience, this policy didn't really cripple us. We were doing a combination of Java, C++ (MS Visual C++ and GNU/C++), VB 6.0 and some web development. For what little web development we did, we had a remote dev box we would RDP into for testing. Again, a bit of an inconvenience, but it didn't stop us from getting our jobs done.
Though I've never had this problem myself, today I'd probably try a virtualisation solution to run these tools.
Or, as a friend of mine once opined: "Follow the process until They choke on it." In this case that would probably mean calling the helpdesk each time you needed a modification to your local IIS config or needed one of the power tools started.
From what I can tell, FDCC is only intended to be a recommended security baseline. I'd push back on the privileges you require and see what they can come up with to accommodate your request. Instead of saying "I need to be a local administrator", list the things you need to be able to do and let them come up with a solution that works (which will likely be to let you administer your machine or a VM): you need to be able to run the debugger in Visual Studio, run a local web server (Cassini), install patches/updates to your dev tools on your own schedule, and so on.
I recently moved to a "semi-managed" environment with SCCM, where patches are installed on a regular basis from a local update repository. I used to do this myself, but this is marginally more efficient for the enterprise and it makes the security office happy. I did get them to put me, and the other developers, in a special collection so that we could block breaking changes if needed (how could IE7 be a security update?). Not much broke, except that now I need to update Windows Defender manually, since I update it more frequently than they do in the managed collection! It wasn't as extreme as your case, obviously, but I think that is, in part, because I was able to present the case for the things I needed to do my job that required more local control.
From the NIST FAQ on securing Windows XP:
Should I make changes to the baseline settings?
Given the wide variation in operational and technical considerations for operating any major enterprise, it is appropriate that some local changes will need to be made to the baseline and the associated settings (with hundreds of settings, a myriad of applications, and the variety of business functions supported by Windows XP systems, this should be expected). Of course, use caution and good judgment in making changes to the security settings. Always test the settings on a carefully selected test machine first and document the implemented settings.
This is quite common within financial institutions. I personally treat this as a game to see how much software I can run on my PC without any admin rights or sending requests to the support group.
So far I have done pretty well: I have only sent one software install request, which was for Rational Software Architect (because I need the plugins from the "official" release). Apart from that, I have Perl, PHP, Python and Apache all up and running. In addition, I have a Jetty server, Maven, WinSCP, PuTTY, Vim and several other tools running quite happily on my desktop.
So it shouldn't really bother you that much. And even though I am one of the worst offenders when it comes to installing unofficial software, I would recommend "no admin rights" to any shop remotely interested in securing their applications and networks.
One common practice is to give developers an "official" locked-down PC on which they can run the official applications, do their e-mail and admin, etc., and a bare-bones development workstation to which they have admin rights.
Not having local administrative access to your workstation is a pain in the rear for sure. I had to deal with that while I was working for my university as a web developer in one of the academic departments. Every time I needed something installed such as Visual Studio or Dreamweaver I had to make a request to Computing Services.