CentOS vs Windows Server 2008

Apologies if the question appears ambiguous, I have little experience in this area and was after some informed opinions.
I am deploying a test scenario of a server/client network and need to make some choices for Server.
The client will be a Windows system, as it meets the client-side requirements; the server choice has more room for selection.
From my experience with Linux in general, the appeal of open source (low cost, security, etc.), and the availability and performance of database and web server software, I have been considering CentOS as the server choice.
How well does this operate with Windows clients?
Am I being too selective and creating unnecessary complication by setting out not to use a Windows Server OS?

Well, yes, very ambiguous!
Essentially, your client/server network will depend on one thing: your clients are Windows, so do you want to authenticate using Active Directory? Yes? Then you'll need Windows servers too.
It really depends on what your test scenario is aimed at testing and without significantly more detailed information as to what your end goal is, we'll be at a loss to help you.
This question would actually be better placed on the Server Fault site, unless your ultimate goal is an environment in which to test application development.

Related

Running exe based program on thin client environment (Performance?)

We are evaluating hospital automation software for a new hospital, where we plan to use thin clients in a VDI environment. Most of the alternatives we are evaluating are web based. One candidate looks good for us considering its functionality, price and the company's support; however, this application is designed with a client-server architecture.
I am not familiar with VDI and haven't used it yet; we plan to use it in this hospital for the first time. I am not sure a client-server architecture is appropriate for thin clients, considering the performance of the system.
The number of users matters here: we expect around 500+ users. With VDI, each client (which in reality runs on the server) lets every user work as if on their own PC, while actually connecting to the server. Running an exe-based program on this architecture worries me. The vendor says it will not be a problem, but they have no reference customer running in a VDI environment.
I need help in evaluating these hospital automation software from an expert :)
Thank you

Blocking OS fingerprinting windows server 2008 IIS7

We recently had a 3rd party auditor perform a penetration test on our Windows Server 2008 web server, which uncovered a remote OS detection vulnerability: it detected the OS as well as the version of IIS.
The auditors recommended: "If possible, configure the web server so that it does not present identifiable information in the banners."
I've done quite a bit of research, and I could not find an easy way to quickly block this information from being detected.
Does anyone know of any way to do this? Is this something that needs to be configured/denied on the server level or web application level within the code?
URLScan is what you'd have used back in the IIS 6 days; I'm not sure it still works with IIS 7 or 7.5. This is a bit of security by obscurity, and quite honestly most attacks spray everything they have at you and don't care whether you present yourself as IIS; they'll throw Apache attacks at you or vice versa.
On top of that, there are plenty of things besides banners that give the server away. The order in which headers are presented differs between IIS, Apache, WebLogic, etc. httprint is one such utility: http://net-square.com/httprint/
Beyond that, you have programs like Satori and p0f that do passive OS identification based on the TCP stack and/or other means.
So go back to the auditor and ask what specifically they are recommending, and why. Removing the extremely low-hanging fruit of the banner is one thing, but unless your attacker is a script kiddie with a script that ONLY looks at banner information, you aren't protecting yourself from much of anything.
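To make the point concrete, here is a minimal sketch of what httprint-style tools look at: not just the Server banner's value, but the order of the response headers. The sample response below is illustrative, not captured from a real server.

```python
# Illustrative raw HTTP response; the header values are made up.
SAMPLE_RESPONSE = (
    "HTTP/1.1 200 OK\r\n"
    "Date: Mon, 01 Jan 2024 00:00:00 GMT\r\n"
    "Server: Microsoft-IIS/7.5\r\n"
    "X-Powered-By: ASP.NET\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

def header_fingerprint(raw_response):
    """Return (server_banner, header_name_order) from a raw HTTP response."""
    head = raw_response.split("\r\n\r\n", 1)[0]
    lines = head.split("\r\n")[1:]  # drop the status line
    names = [line.split(":", 1)[0] for line in lines]
    banner = None
    for line in lines:
        name, _, value = line.partition(":")
        if name.lower() == "server":
            banner = value.strip()
    return banner, names

banner, order = header_fingerprint(SAMPLE_RESPONSE)
print(banner)  # the banner URLScan's RemoveServerHeader would strip
print(order)   # the ordering that still identifies the server family
```

Even with the banner blanked out, the `order` list above would remain an IIS-shaped signature, which is why banner removal alone buys you so little.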

Is allowing unauthenticated RPC calls okay for servers behind a corporate firewall?

Our distributed application uses Microsoft RPC for interprocess communication. Starting with Windows XP SP2 and Windows Server 2003 SP1, Microsoft tightened the bolts, so now programs on two different computers can't communicate that easily.
Either both must run under suitable user accounts so that authentication succeeds, or the RPC server must "open the hole" by calling RpcServerRegisterIf2() with the RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH flag to allow unauthenticated calls, as before the "tightening" change.
How safe is the second option? Will it really compromise the computer which is behind a corporate firewall?
Asking this here because it's a program design problem, not a setup problem.
Not safe at all. Yes, it will really compromise the computer.
People who assume that attacks or malicious behaviour can only come from outside "the corporate firewall" will eventually be very disappointed :-)
I would never delegate the responsibility for securing my systems or applications to a party outside of my control. That's just asking for trouble.
I see this in the same area as people asking why they need constraints in their databases if their applications will always follow the rules, not realising that all it takes is some rogue with a JDBC driver and JRE, or even a buggy release of your application, to bring down the whole house of cards.
I would have thought that, in a corporate environment, all users would be centrally maintained anyway (e.g., AD) so that the issue with authentication would be minimal.
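The alternative to opening the hole is to authenticate at the application layer. This is a minimal sketch of the idea, not Microsoft RPC itself: each request carries an HMAC over the payload using a provisioned shared secret, so an unauthenticated peer inside the firewall still can't issue valid calls. The secret and message layout here are illustrative assumptions.

```python
import hashlib
import hmac

# Assumption: both sides were provisioned with this secret out of band.
SHARED_SECRET = b"replace-with-a-provisioned-secret"

def sign_request(payload):
    """Client side: prepend an HMAC-SHA256 tag to the payload."""
    tag = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    return tag + payload

def verify_request(message):
    """Server side: return the payload if the tag checks out, else None."""
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return payload
    return None  # reject: caller could not prove knowledge of the secret

msg = sign_request(b"do-something")
assert verify_request(msg) == b"do-something"
# A forged message with a bogus tag is rejected:
assert verify_request(b"\x00" * 32 + b"do-something") is None
```

In the real Windows stack you'd get the equivalent by keeping RPC authentication on (e.g. suitable accounts over ntlm/kerberos) rather than re-implementing it, but the sketch shows why "the firewall will protect us" is not a substitute for authenticating each call.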

Is there any free tool for monitoring BizTalk applications remotely? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Whether command line or GUI, I'd be interested in testing any of them.
Your question is very generic, and the answers above all assumed various things. BizTalk monitoring means different things to different people. Your BizTalk administrator might monitor the overall health of the BizTalk environment by opening the BizTalk Administration console, which allows administrators to deploy and manage BizTalk applications and also to monitor the health of the running systems. He/she can query for things like running instances (orchestration, messaging), suspended instances (resumable/non-resumable), failed routing messages, failed subscription messages, etc. The BizTalk Admin console can also be accessed remotely from a different machine, via an MMC snap-in, if you installed the BizTalk administration bits during installation.
Apart from this you also have HAT (Health and Activity Tracking; present in 2006, removed from 2009 onwards), which allows you to do certain monitoring. But to access HAT you need to be on one of the BizTalk machines.
Next comes BAM, which requires some custom configuration, or in some cases some custom coding based on your requirements, to capture runtime monitoring data.
Then you have the various performance counters, which give you a lot of statistical information, like the number of orchestrations running inside a host instance, spool size, number of messages received/sent, etc.
I didn't find any necessity to go for a third party software for any of my monitoring requirements.
HTH,
Saravana Kumar
BizTalk Server MVP.
If you want to monitor what a BizTalk application is doing, you should use Business Activity Monitor (BAM). BAM allows you to track fields from messages or context, and track milestone shapes in orchestrations. There's a BAM training kit here: http://msdn.microsoft.com/en-us/library/cc963995.aspx
You can always use the SMTP adapter to send failed messages to yourself.
Performance counters are also a great way to monitor BizTalk; there is a lot of very useful data there.
BizMon
There is a new BizTalk monitoring tool called BizMon. You can check it out here; I think it does what you're after.
We use it for our three mid-sized BizTalk environments (~50 BizTalk applications in each) and it works well for us. But you can try it for yourself: the tool is free for up to 5 applications (if you're monitoring more applications than that, you'll need a license).
FRENDS Helium
Another tool that might be worth a test is FRENDS Helium. I haven't tried it myself, but they have a beta you can request and try out. I don't know anything about pricing or the like, though.
Do you mean monitoring the status of each app? The only monitoring tools I know of are the ones from Microsoft here.
If you want to monitor what the BizTalk app is doing, you'll need to put logging code into the app itself and then monitor the log (database table, event viewer, etc.).
If you want to monitor the number of orchestrations being executed per second by an application, or the number of messages going through a port, you can use Performance Monitor (perfmon). When you install BizTalk Server, a large number of new performance counters are installed.
If you want to be notified when a BizTalk application starts and stops, you can use WMI. Check into the sample WMI scripts included in the documentation for more info.
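The WMI route can also be scripted. On a BizTalk machine you would connect to the root\MicrosoftBizTalkServer namespace and query classes like MSBTS_ServiceInstance; since those classes only exist where BizTalk is installed, the sketch below just builds the WQL queries. The ServiceStatus codes are assumptions taken from the BizTalk WMI documentation; verify them against your version before relying on them.

```python
# Assumed ServiceStatus codes for MSBTS_ServiceInstance (check your docs):
SUSPENDED_RESUMABLE = 4
SUSPENDED_NOT_RESUMABLE = 32

def suspended_instances_query(resumable=True):
    """Build a WQL query for suspended BizTalk service instances."""
    state = SUSPENDED_RESUMABLE if resumable else SUSPENDED_NOT_RESUMABLE
    return ("SELECT * FROM MSBTS_ServiceInstance "
            "WHERE ServiceStatus = %d" % state)

# Hypothetical usage on a BizTalk box (requires the third-party `wmi` package):
#   import wmi
#   conn = wmi.WMI(namespace=r"root\MicrosoftBizTalkServer")
#   for inst in conn.query(suspended_instances_query()):
#       print(inst.ServiceName)
print(suspended_instances_query())
```

Wrapped in a scheduled task that mails you the results, this gives a bare-bones free monitor without any third-party product.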
For performance monitoring, you can use PAL (http://www.codeplex.com/PAL). You can also use the Message Box Viewer to analyse the health of your system. And one other tool that I found recently and that seems quite cool is the BizTalk Documenter (http://www.codeplex.com/BizTalkDocumenter). It is a must-have in the toolbox of any BizTalk developer.
Minotaur has gained a lot of ground in the past year as an effective BizTalk monitoring tool. It is easy to install and set up, and inexpensive. Visit Raging Bull Tech's web site to investigate Minotaur as a fresh alternative to some of the products in the market today.
Minotaur V2.0 is set for release end of January 2011 and if feedback from the BETA testing is anything to go by, it is set to take the market by storm.
If you wish to put an end to your monitoring problems, go with the best in BizTalk monitoring out there, Minotaur.
You can take a look at http://sourceforge.com/projects/biztalkmonitord <- an open-source, FREE BizTalk monitor, including SMS warnings and a live-feed monitor. It works great for us!
I'll admit it's not the easiest to set up (but once it's running, nothing can compare!).
Best of all, it's multi-environment friendly.
It monitors:
Specific file shares
Suspended and active messages in an environment
Suspended and active messages in an application
Receive ports, send ports and hosts, plus built-in PowerShell commands to restart them!
Free space on file shares!
cheers, and good luck!

How to comply with the new Federal Desktop Core Configuration (FDCC), which will remove local administrator access for all users? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
Improve this question
As developers, we believe that not having local administrative access is going to severely handicap our productivity. We will be restricted from running IIS (we’re a web development shop), installing applications, running Microsoft power tools, etc. If you’re going through the FDCC process now, it would be great to hear how you are coping with these changes.
Having actively worked as a contract developer at a base that uses the AF Standard Desktop, I can tell you a few things.
1: First and most important: don't fight it, and don't do what the first person suggested ("let them choke on it"). That is absolutely the wrong attitude. The military/government is fighting a lack of funding, overstretched resources and a blossoming technology footprint that they don't understand. The steps they are taking may not be perfect, but they are under attack and we need to be helping, not hindering.
OK, that off my chest.
2: You need to look at creating (and I know this is hard with funding the way it is) a local development lab. Every base I have worked at has an isolated network segment with external access, separate from the main government network. You basically have your work PC for e-mail, reports, etc. on the protected network, but you develop in your small lab. I've had a lab that was 2 PCs tucked under my desk that were due to be returned during a tech refresh. In other words, be creative about getting yourself a development machine + servers that are NOT restricted. Those machines are just not allowed to be connected to the main LAN segment.
3: Get the distributions of the desktop configurations. Part of your testing needs to be deploying/running on these configurations. Again, these configurations are not meant for development boxes; they are meant for the machines people use for day-to-day government work.
4: If you are working on web solutions, be very aware of the restrictions on adding trusted sites, ActiveX components, certs, certain types of script execution that the configuration won't allow. Especially if you are trying to embed widgets/portlets/utils that require communications outside the deployed application domain.
5: Above all remember that very few of the people you work for understand the technology they are asking you to implement. They know they want function X but they want you to follow draconian security rule Y while achieving it. What that usually means is that the "grab some open source lib or plugin and go" is not an option. But, that is exactly what your managers think you are going to do because of the buzz around rapid development.
In summary, it's a mess out there. Try to help solve the problem.
While I've never been through the FDCC process, I once worked for a U.S. defense contractor whose policy was that no one had local administrative access to their machines. In addition, flash drives and CD-ROMs were disabled (if you wanted to listen to music on CDs, you had to have a personal CD player with headphones).
If you needed software installed, you had to put in a work order. Someone would show up at your desk with the install media, log in to a local admin account, and let you install the software (the reasoning being that you knew what to install better than they did). Surprisingly, the turnaround was pretty quick, usually around half an hour.
While an inconvenience, this policy didn't really cripple us. We were doing a combination of Java, C++ (MS Visual C++ and GNU/C++), VB 6.0 and some web development. For what little web development we did, we had a remote dev box we would RDP into for testing. Again, a bit of an inconvenience, but it didn't stop us from getting our jobs done.
Without ever having had the problem, today I'd probably try a virtualising solution to run these tools.
Or, as a friend of mine once opined: "Follow the process until they choke on it." In this case that'd probably mean calling the helpdesk each time you needed a modification to your local IIS config or needed one of the power tools started.
From what I can tell, FDCC is only intended to be a recommended security baseline. I'd give some push-back on the privileges you require and see what they can come up with to accommodate your request. Instead of saying "I need to be a local administrator", I'd list the things you need to be able to do and let them come up with a solution that works (which will likely be to let you administer your machine or a VM). You need to be able to run the debugger in Visual Studio, run a local web server (Cassini), install patches/updates to your dev tools on your schedule, ...
I recently moved to a "semi-managed" environment with SCCM that gets patches installed on a regular basis from a local update repository. I was doing this myself, but this is marginally more efficient for the enterprise and it makes the security office happy. I did get them to put me, and the other developers, in a special collection so that we could block breaking changes if needed (how could IE7 be a security update?). Not much broke except that now I need to update Windows Defender manually since I updated it more frequently than they do in the managed collection! It wasn't as extreme as your case, obviously, but I think that is, in part, due to the fact that I was able to present the case for things that I needed to do for my job that required more local control.
From the NIST FAQ on securing Windows XP:
"Should I make changes to the baseline settings? Given the wide variation in operational and technical considerations for operating any major enterprise, it is appropriate that some local changes will need to be made to the baseline and the associated settings (with hundreds of settings, a myriad of applications, and the variety of business functions supported by Windows XP systems, this should be expected). Of course, use caution and good judgment in making changes to the security settings. Always test the settings on a carefully selected test machine first and document the implemented settings."
This is quite common within financial institutions. I personally treat this as a game to see how much software I can run on my PC without any admin rights or sending requests to the support group.
So far I have done pretty well: I have only sent one software install request, which was for "Rational Software Architect" ('cos I needed the plugins from the "official" release). Apart from that I have Perl, PHP, Python and Apache all up and running. In addition I have the Jetty server, Maven, WinSCP, PuTTY, Vim and several other tools running quite happily on my desktop.
So it shouldn't really bother you that much, and, even though I am one of the worst offenders when it comes to installing unofficial software, I would recommend "no admin rights" to any shop remotely interested in securing its applications and networks.
One common practice is to give developers an "official" locked down PC on which they can run the official applications and do their eMail admin etc. and a bare bones development workstation to which they have admin rights.
Not having local administrative access to your workstation is a pain in the rear for sure. I had to deal with that while I was working for my university as a web developer in one of the academic departments. Every time I needed something installed such as Visual Studio or Dreamweaver I had to make a request to Computing Services.
