I have a CloudFront distribution set up to serve various subdomains under my domain.
For example, http://demo1.mydomain.com/test.html and http://demo2.mydomain.com/index.html can be two requests served by the same distribution.
The issue is with CloudFront caching. It caches content based on the path, i.e. "/test.html" and "/index.html" in the examples above. This creates a problem: if two subdomains share the same path, the content cached for that path on one subdomain will also be served from the cache for the same path on the other subdomain. For example:
http://demo1.mydomain.com/example.html
http://demo2.mydomain.com/example.html
The second request here will be served the cached content of the first one.
Can I configure CloudFront to include the subdomain when caching? That way I can avoid same-path conflicts across subdomains.
Thanks
I had the same challenge, and solved it using headers.
In detail:
We identify our tenants by a subdomain id:
<id>.domain.com
We wanted to store a different cached value for each tenant. For example:
123.domain.com/get-config and 456.domain.com/get-config need to return different cached values.
As a solution, since CloudFront doesn't provide a way to vary the cache by subdomain directly, we based the cache key on headers.
In your case, you can pass a header named appName and give it the values demo1, demo2, etc.
CloudFront will keep separate cached values based on that header.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/header-caching.html#header-caching-web-selecting
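If sending the header from the client isn't convenient, one option (my assumption, not something the answer above prescribes) is to derive it at the edge. A minimal Python Lambda@Edge viewer-request sketch, assuming the function is attached to the distribution and appName is whitelisted for caching as described in the linked docs:

def lambda_handler(event, context):
    # Viewer-request handler: runs before CloudFront checks its cache.
    request = event['Records'][0]['cf']['request']
    host = request['headers']['host'][0]['value']   # e.g. "demo1.mydomain.com"
    subdomain = host.split('.')[0]                  # e.g. "demo1"
    # Inject the tenant as a header; with appName whitelisted in the cache
    # behavior, CloudFront keeps a separate cache entry per value.
    request['headers']['appname'] = [{'key': 'appName', 'value': subdomain}]
    return request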
Related
Frontend code is hosted on frontend.netlify.app and
Backend code is hosted on backend.herokudns.app
Is there a way to map a single host name, www.myapp.com to both frontend.netlify.app and backend.herokudns.app?
I can't achieve this on Google Domains using a CNAME record - it only allows me to map www.myapp.com to either frontend.netlify.app or backend.herokudns.app.
Motivation
To make AJAX requests from my frontend without getting a CORS error. Without solving this problem, I have to map something like api.myapp.com to backend.herokudns.app, which adds the extra step of working around the CORS error, since AJAX requests will go to a different domain.
There is no way to map a domain name to multiple other domain names, AFAIK. I believe this is largely to avoid a situation where one domain name maps to a bunch of other domain names, which all map to further domain names, resulting in a DNS amplification effect for every DNS query for the original domain name.
I am currently trying to host a website as an experiment on Heroku. I deployed the back end, which you can consider yyyy.herokuapp.com, and the front end, which you can consider xxxx.herokuapp.com.
Now, here's the issue: I need to set cookies shared between xxxx and yyyy. I know this will be a massive security issue, but since this is an experimental website I am not willing to get a custom domain. I tried setting the cookie's domain to herokuapp.com, .herokuapp.com, *.herokuapp.com, xxxx.herokuapp.com, and yyyy.herokuapp.com.
Yet it doesn't work; Chrome rejects the cookies and gives this message:
This attempt to set a cookie via a Set-Cookie header was blocked because its Domain attribute was invalid with regards to the current host url.
So, how do I approach this issue without the need for a custom domain?
This is my configuration for setting the cookie (on the back end, which uses Flask):
response.set_cookie(
    "example_cookie", value="cookie value", max_age=900,
    expires=datetime.datetime.utcnow() + datetime.timedelta(seconds=900),
    secure=True, domain=".herokuapp.com", samesite='none')
If herokuapp.com were not a public suffix (a.k.a. an effective top-level domain, or eTLD), then a cookie set by xxxx.herokuapp.com with Domain=herokuapp.com would indeed be sent by browsers to yyyy.herokuapp.com as well.
However, there is a snag: in order to isolate its different tenants, Heroku had herokuapp.com added to the public-suffix list a while back. Most browsers refuse to set a cookie for a public suffix:
For security reasons, many user agents are configured to reject Domain attributes that correspond to “public suffixes”. For example, some user agents will reject Domain attributes of “com” or “co.uk”.
Therefore, attempts to set a cookie with Domain=herokuapp.com will be rejected by browsers, as you've experienced.
Note: adding a leading dot in the Domain attribute of the Set-Cookie HTTP header has no effect, at least in modern browsers.
To get out of this difficulty, you could simply buy a cheap domain name (say infinityvive.com) to serve both your frontend and backend from subdomains of it. Then you'd be able to use Domain=infinityvive.com because your domain would not be a public suffix.
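For illustration, a minimal Flask sketch of the same set_cookie call once you own such a domain (infinityvive.com is just the placeholder name used above, and the /login route is my assumption):

import datetime
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    response = make_response("ok")
    response.set_cookie(
        "example_cookie", value="cookie value", max_age=900,
        expires=datetime.datetime.utcnow() + datetime.timedelta(seconds=900),
        secure=True, samesite='None',
        domain=".infinityvive.com")  # not a public suffix, so browsers accept it
    return response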
I have 2 ASP.NET Core 2.2 applications and I want to share session state between them.
I've set up session storage in a SQL database and both applications connect OK.
They are on different subdomains.
I understand that I can set Cookie.Domain in the Startup file, which would solve the problem at a basic level, so each application would create the cookie such that it can be accessed by the other.
e.g.
Domain 1. "www.website.com"
Domain 2. "dashboard.website.com"
At present these sites can't access each others session cookie.
If I set the cookie domain to ".website.com", both should be able to access it.
The problem is that we have multiple domains that use this website, so it could be:
www.domain1.com
dashboard.domain1.com
www.domain2.com
dashboard.domain2.com
www.domain3.com
dashboard.domain3.com
I need to be able to inject the current host name into the cookie domain at startup, so that it is set dynamically depending on the domain of the active website.
Is this at all possible?
Thanks in advance,
David
No, it's not possible. Cookies are domain-bound. You can set a wildcard for the subdomain portion of the cookie, which would then allow it to be seen by example.com, www.example.com, foo.example.com, etc., but you cannot share it with an entirely different domain altogether, such as example2.com.
Your only option in this case is an identity provider like IdentityServer, Auth0, Azure AD, etc. The way these work is that the auth cookie is set at the provider, and then each individual app is authorized against that provider. As such, the apps can receive the user principal from the provider without having the actual auth cookie or their own login functionality.
UPDATE
If you just need to share between sites on the same primary domain, then follow the instructions in the docs. That's focused on auth cookies; if you need to share sessions as well, the same procedure applies, but you must additionally have a true distributed cache set up (Redis, SQL Server, etc.). There's a distributed memory cache, but that's just a default implementation and it's not actually distributed.
I have an application built on Laravel. I needed to enable HTTPS on my system, so I used CloudFront and AWS Certificate Manager.
I was able to configure everything, except that the Laravel authentication system stopped working. Apparently Laravel sessions do not work through CloudFront (the CDN).
The system shows no errors. It simply does not authenticate the user.
I suspect the reason is CloudFront, because CloudFront sits between the browser and the EC2 server. Does anyone know if there is a Laravel authentication problem with CloudFront and Certificate Manager?
my system: https://loja2.softshop.com.br/login
credentials:
login: teste#sandbox.pagseguro.com.br
password: tim140
The Laravel validation also does not show its error messages.
For web distributions, you can choose whether you want CloudFront to forward cookies to your origin and to cache separate versions of your objects based on cookie values in viewer requests.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html
By default, no cookies are forwarded by CloudFront. Since most web sites providing any kind of dynamic content use cookies for managing state and authentication, the default configuration usually needs to be modified for dynamic sites.
Note the caveats on the same page of the documentation -- you generally only want to forward cookies to your origin on requests where the origin actually needs them, so you may want to create separate cache behaviors without cookie forwarding enabled for static resources, in order to maintain a reasonable cache hit ratio for those static resources.
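As a rough sketch of what that change looks like when scripted with boto3 (the distribution id is hypothetical, and this assumes the distribution still uses the legacy ForwardedValues settings rather than a cache policy):

import boto3

cf = boto3.client("cloudfront")
dist_id = "E1234567890ABC"  # hypothetical distribution id

resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Forward all cookies on the default cache behavior so Laravel's session
# cookie reaches the origin; a whitelist of just the session cookie would
# preserve a better cache hit ratio.
config["DefaultCacheBehavior"]["ForwardedValues"]["Cookies"] = {"Forward": "all"}

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)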
I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query-string authentication) that expires and send it to their browser. However, they can then share this URL with others before it expires.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
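For completeness, a hedged Python sketch of generating such a CloudFront signed URL with botocore's CloudFrontSigner (the key-pair id, key path, distribution domain, and object path are all placeholders); CloudFront's signed cookies use the same key material if you prefer the cookie-based flow the question describes:

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key of your CloudFront key pair.
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)  # placeholder key-pair id
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",  # placeholder object
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(minutes=15))
print(signed_url)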