I have a little cPanel web server that I offer to my clients as a convenient addition to my web development/design services.
All of them use Roundcube for their webmail, so it would be good to skip cPanel's mail-app selection page.
I know how to do it for a single account, by creating a subdomain webmail.domainname.com and redirecting it to Roundcube, but I'd like to know whether there's a way to do it for all accounts on the server instead of doing it manually.
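For what it's worth, the per-account trick can be scripted server-wide. Below is a hedged sketch, not cPanel-documented behaviour: it reads cPanel's domain-to-user map from /etc/userdomains and emits one Apache rewrite include covering every webmail.<domain> host. The include path and the redirect target are assumptions; substitute whatever redirect you already use for the single account.

    #!/usr/bin/env python3
    """Sketch: generate one Apache include that redirects webmail.<domain>
    for every cPanel account, instead of configuring each account by hand.
    The file paths and the redirect target are assumptions to adapt."""

    USERDOMAINS = "/etc/userdomains"  # cPanel's "domain: user" map (assumed location)
    INCLUDE = "/etc/apache2/conf.d/includes/pre_virtualhost_global.conf"  # assumed include path
    TARGET = "https://{domain}:2096/"  # placeholder: the Roundcube URL you redirect to today

    rules = ["RewriteEngine On"]
    with open(USERDOMAINS) as fh:
        for entry in fh:
            domain = entry.split(":")[0].strip()
            if not domain or domain.startswith("*"):
                continue  # skip blanks and the wildcard entry
            # Exact, case-insensitive host match, then a permanent redirect.
            rules.append(f"RewriteCond %{{HTTP_HOST}} =webmail.{domain} [NC]")
            rules.append(f"RewriteRule ^ {TARGET.format(domain=domain)} [R=301,L]")

    with open(INCLUDE, "w") as fh:
        fh.write("\n".join(rules) + "\n")
    print(f"wrote {INCLUDE}; run a configtest and restart Apache")

Rerunning the script (e.g. from cron) would pick up newly created accounts, which avoids the per-account manual step.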
When my users use the AD FS single sign-on server, they must go to the full path of the login page, e.g. https://login.example.com/adfs/fs/SingleSignOn.aspx
Instead, I want them to connect to https://login.example.com and land on the login page. I could either make the login page the index of the subdomain, or simply have https://login.example.com/ redirect to https://login.example.com/adfs/fs/SingleSignOn.aspx.
Is there any way to do this using only AD FS? I know IIS would be a simple solution, but these machines are not supposed to be running IIS or anything similar.
I am very new to AD FS and have very limited experience with it, so any help would be greatly appreciated.
There is no way to do this with AD FS out of the box. You will need to put a proxy in front of AD FS with a rewrite rule. As you already guessed, this should not run on the AD FS server itself: AD FS servers should never be exposed directly to the internet, and at a minimum a Web Application Proxy is recommended in front of them anyway.
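To make that concrete, the rule amounts to answering requests for "/" with a 302 to the full sign-on URL. A toy illustration in Python (in production this would be a rule on your reverse proxy, not a standalone process; the URL is the one from the question):

    #!/usr/bin/env python3
    """Toy illustration of the rewrite the front-end proxy performs:
    answer requests with a 302 redirect to the full AD FS sign-on URL."""

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET = "https://login.example.com/adfs/fs/SingleSignOn.aspx"

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Send every request (in particular "/") to the sign-on page.
            self.send_response(302)
            self.send_header("Location", TARGET)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()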
I have developed a web app that does its own user authentication and session management. I keep some data in Elasticsearch and now want to access it with Kibana.
Elasticsearch offers a RESTful web API without any authentication, and Kibana is a purely browser-side JavaScript application that accesses Elasticsearch through direct AJAX calls. That is, there is no "Kibana server", just static HTML and JavaScript.
My question is: How do I best implement common user sign on between the existing web app and Elasticsearch?
I am interested in specific Elasticsearch/Kibana solutions, but also in generic designs for single sign on to web apps and the external web APIs they use.
It seems the recommended way to secure Elasticsearch/Kibana is to put an Apache or Nginx reverse proxy in front that does SSL termination and user authentication (Basic auth). However, that doesn't play well with the HTML-form authentication in my existing web app. Ideally, the user would sign on through the web app and then be allowed direct access to the Elasticsearch API as well.
Solutions I've thought of so far:
1. Proxy everything through the web app: have all calls go to the web app (server), which does the authentication, issues the same request to the Elasticsearch web API, and forwards the response back to the browser.
2. Have the web app (server) store session info that Apache or Nginx can somehow look up and use to authorize access through the reverse proxy.
3. Ditch the web app's sign-on and use Basic auth for everything.
Note that this is a single installation, so I don't really need any federated SSO solutions.
My feeling is that proxying within the web app (#1) is a common generic solution, but it seems a bit heavyweight to route everything through the possibly slow web app, considering that Kibana otherwise talks to the Elasticsearch API directly.
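For reference, a minimal sketch of option #1, assuming Flask and the requests library (neither is implied by the question; any server stack works the same way): the app checks its own login session, then replays the call against Elasticsearch and relays the response.

    """Sketch of option #1: the web app authenticates the user, then
    forwards the request to Elasticsearch and relays the response.
    Flask and requests are assumptions; any framework would do."""

    import requests
    from flask import Flask, Response, abort, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"   # session signing key (placeholder)
    ES = "http://localhost:9200"   # Elasticsearch, reachable only from the app host

    @app.route("/es/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
    def es_proxy(path):
        if "user" not in session:  # reuse the app's existing login session
            abort(401)
        upstream = requests.request(
            request.method,
            f"{ES}/{path}",
            params=request.args,
            data=request.get_data(),
            headers={"Content-Type": request.headers.get("Content-Type", "application/json")},
        )
        return Response(upstream.content,
                        status=upstream.status_code,
                        content_type=upstream.headers.get("Content-Type"))

Kibana would then be pointed at /es/ instead of port 9200, and Elasticsearch itself would listen only on localhost.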
I haven't found an out-of-the-box solution designed for the proxy authentication setup (#2). My idea is to have the web app store session info in memcached or the like, and use some facility in the web server (Apache or Nginx) to look up the session based on a cookie and allow proxied access if it is authenticated.
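One real facility for this is Nginx's auth_request module: Nginx issues an internal subrequest before proxying, and allows the request through only on a 2xx response. A sketch of such an auth endpoint, assuming (my assumption, not from the question) that the web app stores sessions in memcached keyed by the session cookie:

    """Sketch of an auth endpoint for Nginx's auth_request module (option #2).
    Nginx subrequests this before proxying to Elasticsearch and forwards the
    request only if it returns 200. Flask, pymemcache, the cookie name, and
    the key scheme are all assumptions to adapt to the existing web app."""

    from flask import Flask, request
    from pymemcache.client.base import Client

    app = Flask(__name__)
    cache = Client(("localhost", 11211))  # same memcached the web app writes to

    @app.route("/auth")
    def auth():
        sid = request.cookies.get("sessionid", "")  # cookie name is an assumption
        if sid and cache.get(f"session:{sid}"):     # key scheme is an assumption
            return "", 200                          # Nginx proxies the request
        return "", 401                              # Nginx rejects it

On the Nginx side, the Elasticsearch location would carry auth_request /auth; with /auth declared as an internal location proxied to this endpoint.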
The problem seems similar to serving static files directly from the web server (Apache or Nginx) while authenticating through a slow web app. The recommendations I've found for that, however, are very specific to that problem, like X-Sendfile.
You could use a session token; this is a fairly generic solution. Let me explain. When the user logs in, you generate a random string, store it server-side, and pass it back to the user. Each time the user calls your API, you require that session token and check it against the stored value: if it matches, you serve the request; otherwise you reject the call. You should make session tokens expire after a certain interval and issue a new one each time the user logs back in.
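A minimal standard-library sketch of that scheme (the in-memory dict is for illustration only; a shared store such as memcached or Redis would be needed if a proxy tier has to validate tokens too):

    """Sketch of the session-token scheme described above."""

    import secrets
    import time

    TTL = 3 * 24 * 3600  # token lifetime: a few days (assumption)
    _tokens = {}         # token -> (username, expiry timestamp)

    def issue_token(username):
        token = secrets.token_urlsafe(32)  # unguessable random string
        _tokens[token] = (username, time.time() + TTL)
        return token

    def check_token(token):
        """Return the username if the token is valid, else None."""
        entry = _tokens.get(token)
        if entry is None:
            return None
        username, expires = entry
        if time.time() > expires:
            _tokens.pop(token, None)  # expired: forget it
            return None
        return username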
Hope this helps you.
Basically I want a lightweight CalDAV server proxy that passes the username, password, and calendar name to a script, which responds with either "invalid user/pass", "no such calendar", or the calendar itself.
The CalDAV server would then return the appropriate response back to the client.
I will only have the users' calendars stored locally on the server for caching purposes, as I don't have direct access to them. My script will log in to an external site (entirely out of my control) and fetch the calendar by crawling the site.
If possible, I would prefer a server with WSGI support for communicating with my script.
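To make the intended flow concrete, here is a bare-bones WSGI sketch of such an entry point. verify_login() and fetch_calendar() are hypothetical stand-ins for the site-crawling script, and a real CalDAV proxy would also have to handle PROPFIND/REPORT, not just GET:

    """Bare-bones WSGI sketch of the flow from the question: parse Basic
    auth, hand username/password/calendar to a script, and map its answer
    to an HTTP response. The two helpers are hypothetical stand-ins."""

    import base64

    def verify_login(user, password):
        raise NotImplementedError("log in to the external site here")

    def fetch_calendar(user, calendar):
        raise NotImplementedError("crawl the site, return iCalendar text or None")

    def application(environ, start_response):
        auth = environ.get("HTTP_AUTHORIZATION", "")
        if not auth.startswith("Basic "):
            start_response("401 Unauthorized",
                           [("WWW-Authenticate", 'Basic realm="caldav"')])
            return [b"auth required"]

        user, _, password = base64.b64decode(auth[6:]).decode().partition(":")
        calendar = environ.get("PATH_INFO", "/").strip("/")  # e.g. /work -> "work"

        if not verify_login(user, password):
            start_response("403 Forbidden", [])
            return [b"invalid user/pass"]

        ics = fetch_calendar(user, calendar)
        if ics is None:
            start_response("404 Not Found", [])
            return [b"no such calendar"]

        start_response("200 OK", [("Content-Type", "text/calendar")])
        return [ics.encode()]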
I think your best bet here is to use sabre/dav and write a custom backend for it. As an example, at a company I used to work for I wrote a MongoDB backend for SabreDAV, which also pulled the list of calendars from the system it was connected to. That is very similar to your use case, so check out this repository. You can find the backend implementation here; you will also need a fair amount of the surrounding code to make the calendar listings work.
I would advise doing some caching rather than scraping the remote site on each request, since CalDAV in connection with WebDAV-Sync will want to provide updates since the last time the client synchronized, and that will be harder to do if you are scraping in the moment.
I have a REST web service running on a Windows 2003 Server. I want to prompt my users from a mobile app to enter their Windows domain credentials, send those credentials to the web service, and cache them there for a few days. It appears I can cache the credentials using the low-level Credential Management functions, but everything I've seen so far implies they're meant to be called from an interactive session. What's the best way to cache these credentials in a web service?
MORE INFO: The reason I need to cache the credentials in the web service is that I need them to access back-end resources (e.g. SQL Server).
You don't typically cache things in a web service.
How are you prompting them to enter their credentials to begin with? That app/piece should cache the information.
It appears that Windows Identity Foundation provides a better mechanism to accomplish what I want. I'll be looking into that.
I have a site I am working on that I would like to show only to a few others for now. Is there anything wrong with setting up Windows user names and using Windows auth to prompt the user before they get into the development site?
There are several ways, with varying degrees of security:
Don't put it on the internet - put it on a private network, and use a VPN to access it
Restrict access with HTTP authentication (as you suggest). The downside is that it can interfere with the actual site if you are using HTTP auth, or some other type of authentication, as part of the application itself.
Restrict access based on remote IP. Just allow the IPs of users you want to be able to access it.
Use a custom hostname. Host it on a public IP, but don't publish the hostname. That means making an entry in your HOSTS file (or configuring your own DNS server, if possible), e.g. "203.0.113.7 blah.mysite.com", so that "blah.mysite.com" resolves to the site for you but is not resolvable on the public internet. Obviously you'd only make the site respond to that hostname (and not the bare IP).
That depends on what you mean by "best": for example, do you mean "easiest" or "most secure"?
The best way might be to have it on a private network, which you attach to via VPN.
I do this frequently. I use Hamachi to let them access my dev box so they can see what's going on. They have access when they want, and/or when I allow it. When they are done, I evict them from my Hamachi network and change the password.
Hamachi is a software VPN (now part of LogMeIn). Here's a link to Hamachi:
Hamachi
They have a free version which works quite well.
Of course, there's nothing wrong with Windows auth. There are a couple of (not too big) drawbacks, though:
your website's auth scheme is different from the final product's.
you are giving them more access to the box than they really need.
automatically reimaging the machine and redeploying the website becomes more complex, since you have to automate the Windows account creation as well.
I would suggest two alternatives:
do whatever auth you plan on doing in the final website, and make sure all pages require auth
use token-cookie-based auth: send them a link that sets a particular token in a cookie, and in your website code add a quick check for that token before you even get to the regular user auth (a sketch follows below)
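A minimal sketch of that token-cookie gate, assuming Flask and a hand-picked token value (both assumptions; the answer names no stack):

    """Sketch of the token-cookie gate described above: a magic link sets
    a cookie, and every request is checked for it before normal auth."""

    from flask import Flask, abort, redirect, request

    app = Flask(__name__)
    PREVIEW_TOKEN = "s3cret-preview-token"  # hand this out in the link you send

    @app.route("/preview")
    def preview():
        # The link you send: https://dev.example.com/preview?token=...
        if request.args.get("token") != PREVIEW_TOKEN:
            abort(403)
        resp = redirect("/")
        resp.set_cookie("preview", PREVIEW_TOKEN, httponly=True)
        return resp

    @app.before_request
    def require_token():
        if request.path == "/preview":
            return                           # let the magic link through
        if request.cookies.get("preview") != PREVIEW_TOKEN:
            abort(403)                       # no token: block before regular auth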
If you aren't married to IIS, and you need developers to be able to change the content, I would consider Apache + SSL + WebDAV (a.k.a. Web Folders). This lets you offer a secure sandbox where developers can change and view the content without having user accounts on the server.
This setup requires some knowledge of Apache, so it only makes sense if you are already using Apache or frequently need to give outsiders access to your web server.
First useful link I found on the topic: http://pascal.thivent.name/2007/08/howto-setup-apache-224-webdav-under.html
Why don't you just set up an NTFS user, assign it to the website, and remove anonymous access?