How can I restrict access to a file on OSS while still allowing download using wget or curl - alibaba-cloud

I have created a large file (a disk image) on OSS. I am able to download it with a browser using a hotlink (a temporary, time-limited URL). However, when I try to download it to another server using wget or curl with the same URL, I get a 403 error: "you have no right to access this object because of bucket acl".
In this answer ("stop oss links from expiring") it is stated that setting a public or public-read ACL on the object or bucket is the only way to do this. If I do make it public, can I also set up whitelisting for the destination server to prevent others from downloading the file?

Based on this doc, you can use an IP-based bucket policy to restrict access to certain IP addresses.
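For illustration, here is a minimal sketch of such a policy applied with the oss2 Python SDK. The bucket name, endpoint, credentials and source IP below are placeholders, and put_bucket_policy is assumed to be available in your SDK version; check the exact policy fields against the OSS documentation.

    # Hedged sketch: allow anonymous GET on objects in the bucket, but only
    # from an allowlisted source IP. All identifiers below are placeholders.
    import json
    import oss2

    auth = oss2.Auth("<access-key-id>", "<access-key-secret>")
    bucket = oss2.Bucket(auth, "https://oss-eu-central-1.aliyuncs.com", "my-image-bucket")

    policy = {
        "Version": "1",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": ["*"],
                "Action": ["oss:GetObject"],
                "Resource": ["acs:oss:*:*:my-image-bucket/*"],
                "Condition": {"IpAddress": {"acs:SourceIp": ["203.0.113.10"]}},
            }
        ],
    }

    bucket.put_bucket_policy(json.dumps(policy))  # assumed SDK call

With such a policy in place the object stays non-public in general, but wget or curl from the allowlisted server can fetch it with a plain URL.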

Related

error setting up LDAPS with AWS Managed AD - unable to download

I am trying to set up LDAPS with AWS Managed AD but am receiving an "unable to download" error when opening PKIVIEW. See screenshots below.
I granted Public Access to the bucket and folders, but the URL would take me to the S3 bucket properties tab if I was logged in; otherwise it would take me to an AWS login prompt.
I have reached step number 10 under "Step 4b: Configure Enterprise Subordinate CA" in the document listed on the AWS site for setting up LDAPS using AWS Managed AD. See link below.
https://aws.amazon.com/blogs/security/how-to-enable-ldaps-for-your-aws-microsoft-ad-directory/
This is the last action before Step 5.
For the record, I have set up exactly per instructions in this document. Both the RootCA and SubordinateCA have joined the domain and are in the same security group and subnet.
Any help would be greatly appreciated.
Thanks.
PS. I have also posted this question on the AWS forum
I managed to resolve this issue with a combination of two things:
removed/reinstalled the cert services (so I started from step 3 in the doc again) and this time around did not join the RootCA to the domain - I had misread this the first time around
changed the S3 URL paths to align with how they are noted in the doc (because there are a couple of different ways of forming the S3 URL path; see the sketch below). I then tested that I could browse and download each of the files using the S3 URL without logging into AWS, and this worked.
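As a side note on the pathing: the two common S3 URL forms for the same object look roughly like this (bucket, region and key below are hypothetical):

    # Hedged sketch of the two usual ways an S3 object URL is written;
    # bucket, region and key are placeholders.
    bucket, region, key = "my-certs-bucket", "us-east-1", "rootca/RootCA.crt"

    virtual_hosted_style = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    path_style = f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

    print(virtual_hosted_style)
    print(path_style)

The doc expects one particular form, which is why aligning the URLs with it mattered.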

How to Verify server to server communication

I'm having a few problems trying to decide what would be the best solution for something I'm trying to build.
In the application's simplest form, I have a front-end server which allows users to upload files that become associated with their account, for example a video or an image. The upload form posts the request to the front-end server, which then uses a reverse proxy to pass it directly along to a storage server's API (https://www.example.com/users/username/upload).
What I'm currently stuck on is working out the best way for the storage server's API to verify that a request it receives was actually sent via the front-end server's reverse proxy, as opposed to somebody sending a POST request directly to the storage server's API endpoint.
Any suggestions would be really appreciated!
There are multiple ways to do it:
You can use an API gateway (e.g. Apigee, AWS API Gateway, etc.). The gateway can do request-origin validation.
You can let the front-end app use OAuth (for the storage server) and use that to get authenticated/authorized at the storage server.
You can do IP whitelisting between the servers and allow only a restricted set of source IPs (see the sketch after this list).
You can use mutually authenticated SSL (mutual TLS) between the servers to make sure only verified clients can access your API (maybe not a direct fit for your problem, but it can be used in combination with the others).
These are the simple options if you don't need a more complicated or expensive solution.
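As a rough illustration of the IP whitelisting option on the storage-server side, here is a minimal sketch using only the Python standard library; the allowed address, port and endpoint behaviour are placeholders, and in practice you would usually enforce this at the firewall or load balancer as well:

    # Hedged sketch: reject any POST that does not originate from the
    # trusted reverse proxy. All addresses below are placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_PROXY_IPS = {"10.0.0.5"}  # hypothetical front-end proxy address

    class UploadHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            client_ip = self.client_address[0]
            if client_ip not in ALLOWED_PROXY_IPS:
                self.send_error(403, "Forbidden: unexpected source address")
                return
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)  # accept and discard the body in this sketch
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"upload accepted\n")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), UploadHandler).serve_forever()

Keep in mind that if another proxy or NAT sits between the two servers, the source address you see may not be the one you expect, which is one reason to combine this with one of the other options.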

Serve private mapping from S3 tiles by proxying data or signing urls through heroku?

I want to store mapping tiles in a private S3 bucket. Each tile has its own URL and each set of tiles could potentially have GBs of tiles.
I then want to visualise these tiles through a front-end mapping client (e.g. Leaflet). This client pulls tiles as it needs them using each tile's individual URL.
Because the bucket is private I need to authenticate each tile request but performance is fairly critical for this application.
Given that I want to use Heroku to host my site, is it better to route the request through Heroku to get the URL signed before requesting the tile from S3, or to proxy the tile itself through Heroku?
Are there any other options?
If the content in S3 is private, you are going to have to authorize the download one way or another, unless the bucket policy allows the proxy to access the content without authentication based on its IP address. Even then, the proxy still needs to verify that the user is authorized via (presumably) a cookie, which might mean a session database lookup.
Generating a signed URL is not a particularly expensive process, computationally, and (contrary to the impression I occasionally encounter) the signing process is done entirely on your server -- there's no actual interaction with S3 that occurs when generating a signed URL.
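To give a sense of how lightweight that is, here is a minimal sketch using boto3; the bucket name and key are placeholders, and credentials are assumed to come from the usual environment or config:

    # Hedged sketch: the presigned URL is computed locally from your
    # credentials; no request is sent to S3 at this point.
    import boto3

    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-tile-bucket", "Key": "tiles/12/654/1583.png"},
        ExpiresIn=300,  # seconds the URL remains valid
    )
    print(url)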
There's not really a single correct answer. I use both approaches, and a combination of them -- signing URLs in the application, signing them in the database (I have written a MySQL stored function that signs URLs), providing a link to a different app server that reads the user's session cookie and, if authorized, generates a signed URL and returns a 302 redirect, providing a link to a proxy server that proxies pre-signed URL requests to S3 (for real-time logging and to allow me to use my own domain name and SSL cert)... there are valid use cases for all of these approaches, and others.
Ideally I think you want to proxy the requests through a server that is authorized to access the S3 bucket to minimize authentication transactions.
Whether it's on Heroku or not, as long as the proxy server is able to authenticate the end user's access and maintain that session according to the required security policies you should be fine.
Cesium does support Proxies for Imagery and Terrain so once that is in place you should just have to configure the CesiumProxy with your server and be good to go.

Why does FTP support Anonymous login?

I was wondering why FTP supports anonymous login? Is it not a security issue that anyone can access files on an FTP server without having a real account? And if an anonymous account is really a good thing, what is its importance?
If the publisher decided that the resource is public, anonymous access is perfectly valid. Take into account that FTP is just another network protocol, like HTTP. If you are not scared of public HTTP resources, I am not sure why you should have any concern about FTP.
Historically FTP was widely used for making files publicly accessible, so it had to support anonymous login. Note that most servers don't support "no login" but require something like an "anonymous/guest" or "anonymous/empty password" login.
RFC 1635 describes "anonymous ftp" as follows:
Anonymous FTP is a means by which archive sites allow general access to their archives of information. These sites create a special account called "anonymous". User "anonymous" has limited access rights to the archive host, as well as some operating restrictions. In fact, the only operations allowed are logging in using FTP, listing the contents of a limited set of directories, and retrieving files. Some sites limit the contents of a directory listing an anonymous user can see as well. Note that "anonymous" users are not usually allowed to transfer files TO the archive site, but can only retrieve files from such a site.
So, it's just a way to give the general public access to your server. To do this, you need to provide a username that everybody knows (i.e. 'anonymous') without a specific password (i.e. any e-mail address will do). But since everybody can access, you want to protect your content against changes, by enforcing heavy operating restrictions.
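For illustration, this is what an anonymous session looks like from the client side, sketched with Python's ftplib; the host name and file are placeholders:

    # Hedged sketch of an anonymous FTP session; the host is hypothetical.
    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:
        # login() with no arguments uses the conventional "anonymous" user
        # and an email-style placeholder password.
        ftp.login()
        ftp.retrlines("LIST")  # list a public directory
        with open("README", "wb") as fh:
            ftp.retrbinary("RETR README", fh.write)  # download a file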

What is the best way to restrict access to a development website?

I have a site I am working on that I would like to display only to a few others for now. Is there anything wrong with setting up Windows user names and using Windows auth to prompt the user before getting into the development site?
There are several ways, with varying degrees of security:
Don't put it on the internet - put it on a private network, and use a VPN to access it
Restrict access with HTTP authentication (as you suggest). The downside is that this can interfere with the actual site if you are using HTTP auth, or some other type of authentication, as part of the application.
Restrict access based on remote IP. Just allow the IPs of users you want to be able to access it.
Use a custom hostname. Have it on a public IP, but don't publish the hostname. This means make an entry in your HOSTS file (or configure your own DNS server, if possible) so that "blah.mysite.com" goes to the site, but that is not available on the internet. Obviously you'd only make the site accessible when using that hostname (and not the IP).
That depends on what you mean by "best": for example, do you mean "easiest" or "most secure"?
The best way might be to have it on a private network, which you attach to via VPN.
I do this frequently. I use Hamachi to allow them to access my dev box so they can see what's going on. They have access to it when they want and/or when I allow. When they are done I evict them from my Hamachi network and change the password.
Hamachi is a software VPN (also known as LogMeIn Hamachi).
They have a free version which works quite well.
Of course, there's nothing wrong with Windows auth. There are a couple of (not too big) drawbacks, though:
your website's auth scheme is different from the final product.
you are giving them more access to the box than they really need.
automatically reimaging the machine and redeploying the website is more complex, as you have to automate the Windows account creation.
I would suggest two alternatives:
do whatever auth you plan on doing in the final website and make sure all pages require auth
do token-cookie-based auth - send them a link that sets a particular token in a cookie, and in your website code add a quick check for that token before you even go to the regular user auth (a sketch of this follows below)
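Here is a minimal sketch of the token-cookie idea, assuming a Flask app; the cookie name, token value and routes are placeholders rather than anything prescribed:

    # Hedged sketch: gate every page behind a preview token set by a
    # special link. Token, cookie name and routes are placeholders.
    from flask import Flask, abort, make_response, redirect, request

    app = Flask(__name__)
    PREVIEW_TOKEN = "some-long-random-string"  # hypothetical shared secret

    @app.route("/preview-login")
    def preview_login():
        # The link you send out (/preview-login?token=<secret>) sets the cookie.
        if request.args.get("token") != PREVIEW_TOKEN:
            abort(403)
        resp = make_response(redirect("/"))
        resp.set_cookie("preview_token", PREVIEW_TOKEN, httponly=True)
        return resp

    @app.before_request
    def require_preview_token():
        # Runs before every request; your regular user auth happens afterwards.
        if request.path == "/preview-login":
            return None
        if request.cookies.get("preview_token") != PREVIEW_TOKEN:
            abort(403)

Once the preview period is over you change or remove the token and the links stop working.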
If you aren't married to IIS, and you need developers to be able to change the content, I would consider Apache + SSL + WebDav (aka Web Folders). This will allow you to offer a secure sandbox where developers can change and view the content without having user accounts on the server.
This setup requires some knowledge of Apache so it only makes sense if you are already using Apache or if you frequently need to provide outsiders access to your web server.
First useful link I found on the topic: http://pascal.thivent.name/2007/08/howto-setup-apache-224-webdav-under.html
Why don't you just set up an NTFS user and assign it to the website (and remove anonymous access)?
