Getting started with Single Sign-On / Windows Authentication

First off, The Problem:
We have a Web App with a Flash front-end that talks to our ASP.NET web service via SOAP which then deals with all of our server side code (C#).
Right now, we implement a simple user sign on in our application, storing the info in our MSSQL DB.
A client has requested what I understand to be Windows authentication through our application, using the currently logged-in user.
So, I have been tasked with investigating this. Nobody, including myself, has any experience in this area.
I have been reading up on some basic Active Directory information, and some simple tutorials. I understand how to get access to the directory using ADSI through code. What I'm really interested in seeing is how the entire thing should be architected. I don't want to throw together a hacky solution.
Does anyone know of a good tutorial for this kind of thing or have any advice on getting started? More importantly, does this even sound viable?
I know I haven't given much information, but feel free to ask and I will provide answers.
Thanks.
Edit:
Will, to give you an idea of the scope of this, the network will include every computer in a large hospital. So yes, this is huge. Clearly I need to start small. I would like to come up with something that will work at my office first. Maybe ~10 Windows computers on a single domain. One Domain Controller.
I am also open to any good books on the subject.

If you are going to tie into Active Directory you will want to take a look at the System.DirectoryServices namespace. The implementations can vary wildly depending on your system architecture, but this should give you a good starting point.
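As a minimal sketch of one possible starting point (not a full SSO design): with Windows authentication enabled in IIS and anonymous access disabled, the web service can read the caller's Windows identity from the request and then look up additional details in Active Directory via System.DirectoryServices. The LDAP path and the property queried below are illustrative assumptions for a simple single-domain setup like the one described in the edit.

```csharp
// Minimal sketch, assuming IIS Windows authentication is enabled and a single
// domain; the LDAP path and queried property are illustrative only.
using System.DirectoryServices;
using System.Web;

public static class CurrentUserLookup
{
    public static string GetDisplayName(HttpContext context)
    {
        // With Windows auth on, this is e.g. "HOSPITAL\jsmith"
        string logonName = context.User.Identity.Name;
        string samAccountName = logonName.Contains("\\")
            ? logonName.Substring(logonName.IndexOf('\\') + 1)
            : logonName;

        // Query Active Directory for the matching user object
        using (var root = new DirectoryEntry("LDAP://DC=example,DC=local"))
        using (var searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + samAccountName + "))";
            searcher.PropertiesToLoad.Add("displayName");

            SearchResult result = searcher.FindOne();
            if (result == null || result.Properties["displayName"].Count == 0)
                return samAccountName;

            return (string)result.Properties["displayName"][0];
        }
    }
}
```

Whether this runs under the caller's identity or a service account (and whether delegation is needed for the Flash-to-web-service hop) is exactly the kind of architectural question worth settling before writing much code.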
Enjoy!


Automation layer above a site

I'm looking into creating a website that sits on top of another site. I want this site to act as a sort of driver/automator of the original site. The original site is slow, and you need to input the same data repetitively (and lots of it, which is infuriating).
What would be the best way of doing this?
I have started using watir-webdriver in Ruby, and it seems to work well! Would I be able to host this? I know it launches a browser (Firefox in my case), and my worry is that I won't be able to host the application.
I don't want to put all my eggs in this one basket and find out later there's a stumbling block to getting it done!
The short answer
I think there are better tools for web scraping than web testing tools (watir and others), and your end result might require a lot more work than you imagine.
The long answer
This sounds like a case of the façade pattern in which your application would act as the new frontend and the old/existing site as the backend for the improved experience of the service.
Some things to think about before jumping into programming:
If the old site requires users to register, would your users be willing to re-register on your site so that you could log them in to the old site programmatically?
How frequently does the same data need to be entered, and how would your site prevent that repetition?
The existing site may have expectations about request headers, which might cause you extra headaches and require quite a bit of work to circumvent.
Are you allowed to use the existing site's user interface material or do you need to start from scratch?
How often is the existing site changed and how would it affect your application?
In summary, there are lots of factors and issues to take into account, depending on how the existing site is implemented and who your intended users are. Suggesting the best way to do it would require much more knowledge of both the existing site and how you want to improve it.
I haven't used watir-webdriver myself, but if it is like Selenium and starts a new browser instance every time you run it, then hosting it would most likely not work as you'd expect. There are better tools for what you are thinking of doing, i.e. web scraping, and you may want to take a look at the following, for example (a rough sketch of the scraping approach follows the links below):
https://www.ruby-toolbox.com/categories/Web_Content_Scrapers
https://www.ruby-toolbox.com/categories/http_clients
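As a rough, hedged illustration of that scraping approach (shown in C# only for concreteness; the Ruby libraries above do the same job), the idea is to issue the HTTP requests the old site expects yourself rather than driving a real browser. The URLs, form fields, and login flow below are hypothetical.

```csharp
// Rough illustration of the scraping approach: send the HTTP requests the
// old site expects directly, instead of automating a real browser.
// The URLs, field names, and credentials here are hypothetical.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class OldSiteClient
{
    static async Task Main()
    {
        // Keep cookies so the login session carries over to later requests
        var handler = new HttpClientHandler { UseCookies = true };
        using (var client = new HttpClient(handler))
        {
            // 1. Log in on the user's behalf (see the re-registration question above)
            var login = await client.PostAsync("https://old-site.example/login",
                new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["username"] = "user",
                    ["password"] = "secret"
                }));
            login.EnsureSuccessStatusCode();

            // 2. Submit the repetitive form data programmatically
            var submit = await client.PostAsync("https://old-site.example/data-entry",
                new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["field1"] = "value1",
                    ["field2"] = "value2"
                }));
            Console.WriteLine((int)submit.StatusCode);
        }
    }
}
```

Because a client like this can run headless on a server, it avoids the hosting concern that comes with launching a browser, but it inherits all the caveats listed above about headers, logins, and the old site changing underneath you.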

How does one identify why a website is slow? [closed]

I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic - maybe a DoS attack or exceptionally high traffic. The interviewer told me to assume the server has normal traffic and no DoS.
I was kind of lost because I had never thought of this problem. I have almost no idea how running a server/website works. So if someone could highlight a few approaches, it would be nice.
While googling around, I could find only this relevant, wonderful article. That article is kind of too technical for me now, but I'm slowly breaking it down and understanding it.
Since you already said that when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should figure out at this point is the difference between you and the user who reports that your site is slow. It might be a lot of different things:
Is the user using a slow network connection (mobile for example)?
Does the user experience the same problems with other websites hosted at the same webhoster? If so, this could indicate a network problem. Normally this could also indicate a resource problem at the webserver, but in that case the site would also be slow for you.
If neither of the above leads to an answer, you can assume that the connection to the server and the server itself are fine. This means the problem must be in the user's device. Find out which browser/OS they use and try to replicate the problem. If that fails, find out whether they use any antivirus or similar software that might cause problems.
This is a great tool to measure the speed of web pages, and it tells you what makes them slow: https://developers.google.com/speed/pagespeed/insights
I think one of the important things missing from the above answers is server location, which can play a vital role in web performance.
When someone says it takes a long time to open a web page, that usually means high latency, and high latency can be caused by server location.
As the owner of the web page, you and the server may effectively be co-located, so you see low latency.
But if the client is on the other side of the world, latency increases drastically, and hence performance feels slow.
Another factor is caching, which drastically affects latency.
Taking Facebook as an example: they have servers all over the world to reduce latency (and for several other advantages), and they use a huge caching system for their hot data (trending topics), whereas cold data (old data) is stored on disk, so loading an older photo or post takes longer.
So a user might have complained because they were trying to load some cold data.
I can think of a few reasons (the first two are already mentioned above):
High Latency due to location of client
Server memory might need to be increased
Number of service calls from the page.
A service might have been down at the time of the complaint, which could prevent the page from loading.
The server load might have been too high at the time of the poor experience; the server might need more resources (e.g. adding another server/web server to the cluster).
Check if there was any background job running on the server at that time.
It is important to check the logs and schedules of the batch jobs to determine what all was running at that time.
Hope this helps.
Normally the user takes the page loading time as the measure of whether a site is slow. But if you really want to know what is taking the most time, open the browser's developer tools by pressing F12. If your browser is Chrome, click the Network tab and see what calls your application is making and which ones take the longest. If you are using Firefox, you need to install Firebug; once you have that, press F12 and click Net.
One reason could be that the user's role is different from yours. You might have, say, an administrator privilege (something like a superuser role), and the code might simply allow everything for such a role, meaning it does not really do much conditional checking of what is allowed or not. Sometimes it is considerable overhead to fetch all of a user's privileges and evaluate those conditions; of course, it depends on how the authorization is implemented. That means the page might be slow only for specific roles, so you should find out the user's role and see whether that is the reason.
It may simply be an issue with the connection of the person connecting to your site, or it's possible it was a temporary issue and by the time you checked your site, everything was fine. You could check your logs or ask your host whether there was an issue at the time the slowdown occurred.
This is often a memory issue, and it can be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
Good luck!
Michael Orebe
Though your question is quite clear, web site optimisation is a very extensive subject.
The majority of popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem a simplistic answer, but the less code you run at the server and the more code you run at the client, the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application which represents W3C best practices and runs the minimum amount of code at the server and the maximum amount of code at the client: http://developersfound.com/W3C_MVC_EX.zip This codebase is also MVC-compliant.
The codebase comes with a MySQL database dump, PHP, and client-side code. To see the code in action you will need to restore the SQL dump to a MySQL instance (the dump came from MySQL 8 Community) and add the user and schema permissions found in the PHP file (conn_include.php), setting the user to have execute permissions on the schema.
If you contrast this codebase against the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC-compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Most popular Node frameworks also run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly unless the framework supports AJAX dumps, such as Yii 2.
Two of the most important rules to follow for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML tags or vice versa (unless there is a very good excuse such as SEO), and religiously avoid writing code to run at the server if it can run at the client. True MVC is also based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong, MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits, they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give you much control over what code runs at the client or the server and are quite inefficient, but you can get prototypes up and running faster with less code.
In contrast, the old-fashioned way takes a bit more elbow grease, but you have complete control over what runs at the server and what runs at the client.
As an additional bit of optimisation advice, avoid pass-through queries and triggers and instead opt for stored procedures. Stored procedures didn't yet exist when MVC first appeared as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
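As a small, hedged sketch of that last point (in C#/ADO.NET, with made-up procedure, class, and parameter names): the application tier passes parameters to a stored procedure instead of shipping ad-hoc SQL.

```csharp
// Minimal sketch: call a stored procedure rather than sending a pass-through
// query from the application tier. The procedure and parameter names are
// hypothetical; the query logic itself lives in the database.
using System.Data;
using System.Data.SqlClient;

class OrdersRepository
{
    private readonly string _connectionString;

    public OrdersRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public int CountRecentOrders(int customerId)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("dbo.usp_CountRecentOrders", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;

            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
```

The same idea applies to the MySQL/PHP codebase linked above: keep the query logic in the database and pass only parameters from the application.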
Hope this advice helps.

Where to begin with SNMP agent implementation?

Before I start, I realise there are a few SNMP-related questions here already, but not many seem to have been answered - that could mean I'm asking in the wrong place, but I don't know where else to go at the moment.
I've been reading up as best I can on SNMP for a couple of days but am finding it difficult to get my head around what is meant to be happening. The idea is eventually we will integrate SNMP into our Java application server which will allow the end users to incorporate it into their pre-existing Network Management Systems(NMS).
Unfortunately I'm feeling entirely confused by what is meant to be going on. From what I understood from talking to the end users (which was unfortunately before any research) was that the monitoring allows their existing NMS to give their admin guys a view of the vital statistics in a tree type display, giving them feedback regarding different parts of the system at a high level and allowing them to dig down into specific subsystems.
From reading around, it seems we would implement an 'Agent' which has several defined interfaces allowing GET requests etc. to be processed and responded to. That makes sense, but I am at a loss to work out what the format of the communication is - there don't seem to be any specific examples of what any of the messages look like or how the information is encoded.
More of my confusion, though, is regarding the Management Information Base (MIB). I had, wrongly, assumed that the interface of the agent would allow the monitored attributes to be requested and then, in turn, the values for those attributes, allowing any new Agent to be started and detected without any configuration on the NMS end (with the exception of authentication in v3). This, if I understand correctly, is not the case, and the Agent must instead define MIBs which the NMS can use to determine those attributes. My confusion increases when people refer to thousands of existing MIBs and say they can be reused, which I don't understand. Is the intention that a single MIB definition can describe how a particular attribute of a network device (something simple like whether a router is connected to the internet: yes/no) works across many different devices? If so, I don't believe our software would allow the monitoring of anything common to any other device/system, but should we be looking for already existing MIBs? At the moment I don't really see any good rationale for such a system; surely it would be easier for the Agent to export that information itself - so I'd appreciate it if someone could enlighten me!
I think it would help if I were able to set up a simple SNMP agent and some sort of client; then I could begin to see the process and eventually inspect the communication between the two, but I'm finding it difficult to find anywhere that provides information on doing such a thing. Nagios has been recommended to us as a test 'client'/NMS, but its 'get started quick' section recommends downloading a 600 MB virtual machine - surely there is a quicker way to get started?
Any help or suggestions will be appreciated. I have been through the Wiki page, but it doesn't seem to go into much detail about MIBs, and, having never had to deal with anything like the referenced RFCs before, I find that while they may contain all of the information, they seem completely impenetrable to me at the moment. Are there any books that can be recommended for an overview and implementation of v3?
Thanks for reading and even more thanks if you think you can help!
It seems to me that you have read all this SNMP information piece by piece in a disorganized way. That is not recommended and of course leads to confusion.
What about forgetting what you have learnt so far and diving into a good book such as Essential SNMP?
http://shop.oreilly.com/product/9780596008406.do
Click the Google Preview icon to preview it.
You cannot depend on a network forum to teach you the ABCs; I have found that to be impractical.
The communications interface is SNMP. That's the protocol used for transmission (usually on top of UDP). The thing that services information requests is an SNMP Agent. The thing that sends information requests is an SNMP Manager.
The definition of what information should be made available by the Agent, and requested by the Manager, goes in a MIB. A MIB is the "glue", a directory of what sort of things any particular system can/should offer. It maps numeric codes to names and types that allow us to make sense of the data, much like how a phone directory maps phone numbers to people's names and addresses.
Generally you would create and ship and use your own MIBs that can describe aspects specific to your own product, but you are supposed to service some standard information requests as well, which are defined in existing MIBs. Yes there are thousands of other pre-existing MIBs and the likelihood that you need more than one or two of these is remote. They are typically published versions of MIBs for existing products.
The conventional way to "toy around" is to install Net-SNMP (a software suite that includes an agent implementation and allows you to "bolt on" your own logic and your own MIBs fairly easily) then examine the results using a packet capturer like Wireshark.
For a fuller implementation in production you may stick with Net-SNMP, or write your own Agent software, or do what I did and create a hybrid of the two that's a little more flexible and performant but uses Net-SNMP's backend for handling all the low-level SNMP stuff.
Your first step, though, is to read a book or some other teaching material that can clear all your misconceptions, because guesswork won't cut it.
I had success using the samples from this page. Both the shell and Perl Net-SNMP code were very straightforward to implement and query.

Any way to use MvcMiniProfiler on windows application? Or is there a sister application?

So I've started using MvcMiniProfiler on our websites and quite like it. We have a Windows application component/framework that is leveraged by the website, and I was wondering if it is possible to use the profiler on that. I'm assuming not, but maybe there is a subcomponent of the code that could be used? I see that there is a way to configure where the results are stored (i.e. SQL Server), so maybe it is close to possible?
We have the following flow:
Website submits job to 'broker' then returns a 'come back later' page.
Broker runs and eventually data in the websites database gets updated by the broker.
Website displays the results.
It'd be great if there were a way I could get the entire workflow profiled. If there is no way / no intention from the developers to make MvcMiniProfiler available to Windows applications, are there any recommendations for similarly styled profilers?
You could get this working by using SqlServerStorage; there is very little in the code base that heavily depends on ASP.NET. In fact, the SQL interceptor is generalized, and so is the stack used to capture the traces.
I imagine a few changes would need to be made internally, e.g. using Thread.SetData as opposed to HttpContext, but they are pretty superficial.
The way you would get this going is by passing the "profiling identity" into the app and then continuing the tracking there. Eventually, when the user hits the site afterwards, the results would show up as little "chiclets" on the left side.
A patch is totally welcome here, but it is not something the profiler does in its current version.
(note to future readers, this is probably going to be out of date at some point, if it is please suggest an edit)
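For illustration only, here is a rough sketch of what profiling the broker might look like if MvcMiniProfiler's Start/Step/Stop and SqlServerStorage worked outside a web context. As the answer above notes, the current version still leans on HttpContext internally, so treat this as the target shape rather than working code.

```csharp
// Hypothetical sketch: profiling a non-web broker process with MiniProfiler,
// assuming the internal HttpContext dependency were replaced as described above.
using MvcMiniProfiler;
using MvcMiniProfiler.Storage;

class BrokerJob
{
    static void Main()
    {
        // Persist results to the same store the website reads from
        MiniProfiler.Settings.Storage = new SqlServerStorage(
            "Data Source=.;Initial Catalog=Profiling;Integrated Security=true");

        var profiler = MiniProfiler.Start();

        using (profiler.Step("Load job submitted by the website"))
        {
            // ... fetch the queued job ...
        }

        using (profiler.Step("Process job and update the website's database"))
        {
            // ... the actual broker work ...
        }

        MiniProfiler.Stop(); // saves the timing tree via the configured storage
    }
}
```

If the "profiling identity" from the original web request were passed along with the job, the website could later pull these timings back out of SqlServerStorage and show the whole submit/broker/display workflow together.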
Yes, there's a Windows port of MiniProfiler: http://nootn.github.io/MiniProfiler.Windows/

Correlate application logs (Coldfusion/J2EE) with IIS logs to debug client network problems

I've seen some clients complaining about slowness of my website lately and I'm pretty sure that the problem is related to their network. I'd like to be able to justify this to myself more thoroughly and also be able to more proactively reach out to clients that appear to be having network issues before they come banging on my door.
If I were running ASP.NET, I would try to use the Response.AppendToLog method and append a token so that I could tie everything back to my custom application-level logging (user, client, processing time, etc.). I can't seem to find a way to do that without ASP.NET; I'm guessing it's built into ASP's ISAPI. My requests go through IIS to JRun's ISAPI to ColdFusion (.cfm/.cfc files).
I'm most interested in knowing how long it took the client to receive the content not just the time it took to process the request.
If there are other places/information that I'm not considering that's worth looking at, please let me know. Perhaps I should log information from HTTP.sys somehow?
I know that I could set a cookie on every request and have that logged by IIS, I was just hoping there would be a better solution.
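For reference, the ASP.NET mechanism mentioned above looks roughly like the sketch below: an IHttpModule that appends a correlation token to the IIS log entry for each request. It is only here to show the shape of the token idea; it doesn't apply directly to the JRun/ColdFusion pipeline, and the token name/format is made up.

```csharp
// Reference sketch of the ASP.NET approach: append a per-request correlation
// token to the IIS log so it can be joined against application-level logs.
// The "corrid" token name is hypothetical.
using System;
using System.Web;

public class CorrelationModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            var token = Guid.NewGuid().ToString("N");

            // AppendToLog adds custom text to this request's IIS log entry
            ctx.Response.AppendToLog("corrid=" + token);

            // Keep the same token available for application-level logging
            ctx.Items["CorrelationToken"] = token;
        };
    }

    public void Dispose() { }
}
```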
Thanks for your thoughts!
See Jiffy. It "is an end-to-end real-world web page instrumentation and measurement suite."
The introductory video gives a good overview.
