I made some changes to my origin server, which now serves different data from the same URL.
I tried to clear the cache completely by running the following invalidation in the CloudFront UI:
But this didn't work. How can I completely wipe the Amazon CloudFront cache in one go?
CloudFront now supports wildcard and full-distribution invalidation. You will need to do one of the following:
Invalidate each object that has changed
Invalidate /*
Version your objects so that they are considered new (i.e. rename them or add a query string)
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#invalidating-objects-console
You need to use /* instead of /.
Also, if you need to do this frequently, you can do it using the AWS CLI.
aws cloudfront create-invalidation --distribution-id=YOUR_DISTRIBUTION_ID --paths "/*"
Edit: thanks to @speckledcarp, you need to use "/*" (with quotes) when using the CLI.
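If you script this end to end, you can also wait for the invalidation to finish before continuing. A minimal sketch (shell), assuming a placeholder distribution ID and an already configured AWS CLI:

# Create the wildcard invalidation and capture its ID
INVALIDATION_ID=$(aws cloudfront create-invalidation \
    --distribution-id YOUR_DISTRIBUTION_ID \
    --paths "/*" \
    --query 'Invalidation.Id' --output text)

# Block until CloudFront reports the invalidation as completed
aws cloudfront wait invalidation-completed \
    --distribution-id YOUR_DISTRIBUTION_ID \
    --id "$INVALIDATION_ID"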
According to the AWS documentation, you need to use /* instead of /.
I have the Kibana plugin installed on each ES node. Kibana sits behind an nginx reverse proxy because it's served from the /kibana/ route. Elasticsearch is protected with the Search Guard plugin.
Question: the history for Dev Tools/Console is reset with each login (after each login, the history is empty). I'm not sure whether I'm missing something or whether that's expected behaviour when Search Guard is in use. I remember this worked well before installing Search Guard, but I'm not sure if that's a coincidence or if it's indeed related. History is saved properly within a single session.
Elastic version: 6.1.3
Thank you!
It's stored in the browser's local storage under the sense:editor_state key (in Chrome).
If local storage is wiped daily or the cache is cleared, your saved searches go with it.
Use ?load_from= in your URL and save your queries in a JSON file. Be aware of CORS if you serve the file from a web app of your own.
I have a node_modules cache in my Bitbucket Pipeline and I added a new module (e.g. yarn add react-modal). How do I make Bitbucket Pipelines detect the new yarn.lock and invalidate its cache?
Yeah, as Marecky has already mentioned, there is a ticket for that. There is also another one, https://jira.atlassian.com/browse/BCLOUD-17605, which should address exactly this issue. In short, there is an API to invalidate the cache, but it's currently reserved for internal use only.
Here is the official way to clear the caches:
https://bitbucket.org/atlassian/bitbucket-clear-cache/src/master/
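For completeness, a rough sketch of what clearing a cache over the REST API might look like, assuming the pipelines-config caches endpoint referenced in that ticket is (or becomes) available to your account; the paths, placeholders, and app-password auth below are assumptions rather than a documented public contract:

# List the pipeline caches to find the UUID of the node_modules cache (assumed endpoint)
curl -u "$BB_USER:$BB_APP_PASSWORD" \
    "https://api.bitbucket.org/2.0/repositories/WORKSPACE/REPO_SLUG/pipelines-config/caches/"

# Delete a cache by UUID so the next build rebuilds it from the new yarn.lock (assumed endpoint)
curl -u "$BB_USER:$BB_APP_PASSWORD" -X DELETE \
    "https://api.bitbucket.org/2.0/repositories/WORKSPACE/REPO_SLUG/pipelines-config/caches/{cache_uuid}"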
From what I have read, Heroku recommends pointing your CDN directly at your dynos rather than using asset_sync, so I did this:
# config/environments/production.rb
config.action_controller.asset_host = "<MY DISTRIBUTION SUBDOMAIN>.cloudfront.net"
The origin of my distribution is dcaclab.com
The CNAME of my distribution is assets.dcaclab.com
So, after I successfully pushed my Rails 4 app to Heroku, I found that all my assets are served from my CloudFront distribution, like:
http://<MY DISTRIBUTION SUBDOMAIN>.cloudfront.net/assets/features/Multiple%20Circuits-edbfca60b3a74e6e57943dc72745a98c.jpg
What I don't understand is how my asset files got uploaded to my CloudFront distribution. Also, where can I find them?
I thought they would be uploaded to my S3 bucket assets, but it was just empty.
Can someone enlighten me, please?
If your files are being served from your CloudFront distribution and are not being pulled from your S3 bucket "assets" as you previously thought, then you have probably set up your CloudFront distribution to use a custom origin such as your web server (S3 buckets are the usual/standard origins).
If you have set up your CloudFront distribution to use your web server as the origin, then CloudFront will pull the files from your web server before caching them at different edge locations (the pulling and caching are done automatically by CloudFront when a user accesses resources through the "distributionname.cloudfront.net" domain).
By the way, just a side note, I ran "dig assets.dcaclab.com" which resolves to "assets.dcaclab.com.s3.amazonaws.com".
If I read the intro docs correctly, you don't necessarily see them on CloudFront, at least not outside of the management console. They're cached on edge nodes and requested from your origin if they're not found or have expired. They're "uploaded" on demand: the edge requests the file from the origin if it doesn't already have it.
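One way to observe this pull-through behaviour is to request an asset through the distribution and look at the X-Cache response header that CloudFront adds: the first request is typically a miss (fetched from the origin) and a repeat request a hit. A small sketch using the placeholder URL from the question:

# First request: expect "X-Cache: Miss from cloudfront" while the edge pulls from the origin
curl -sI "http://<MY DISTRIBUTION SUBDOMAIN>.cloudfront.net/assets/features/Multiple%20Circuits-edbfca60b3a74e6e57943dc72745a98c.jpg" | grep -i x-cache

# Second request: once cached at that edge, expect "X-Cache: Hit from cloudfront"
curl -sI "http://<MY DISTRIBUTION SUBDOMAIN>.cloudfront.net/assets/features/Multiple%20Circuits-edbfca60b3a74e6e57943dc72745a98c.jpg" | grep -i x-cache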
Where would I specify that I want objects uploaded to Amazon S3 via Fine Uploader to use Reduced Redundancy storage?
Thank you.
Note that the ability to use Reduced Redundancy Storage with Fine Uploader S3 will be part of the 4.0 release. A new boolean property (reducedRedundancy) will be added to the objectProperties option. This work is already complete in the develop branch of the GitHub repo. You can read more about this and see the commit(s) by looking at feature #1008.
My fantasy is to be able to spin up a standard AMI, load a tiny script and end up with a properly configured server instance.
Part of this is that I would like to have a PRIVATE yum repo in S3 that would contain some proprietary code.
It seems that S3 wants you to either be public or use Amazon's own special flavor of authentication.
Is there any way that I can use standard HTTPS plus either Basic or Digest auth with S3? I'm talking about direct references to S3, not going through a web server to get to S3.
If the answer is 'no', has anyone thought about adding AWS Auth support to yum?
The code in cgbystrom's git repo is an expression of intent rather than working code.
I've made a fork and gotten things working, at least for us, and would love for someone else to take over.
https://github.com/rmela/yum-s3-plugin
I'm not aware that you can use non-proprietary authentication with S3; however, we accomplish a similar goal by mounting an EBS volume to our instances once they fire up. You can then access the EBS volume as if it were part of the local file system.
We can make changes to EBS as needed to keep it up to date (often updating it hourly). Each new instance that mounts the EBS volume gets the data current as of the mount time.
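To illustrate the approach, a rough sketch of attaching and mounting such a volume on a freshly launched instance; the volume/instance IDs, device name, and mount point are placeholders, and the device may surface under a different name depending on the AMI:

# Attach the shared data volume to the new instance (IDs are placeholders)
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf

# On the instance, mount it once the device shows up (often as /dev/xvdf)
sudo mkdir -p /mnt/repo-data
sudo mount /dev/xvdf /mnt/repo-data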
You can certainly use Amazon S3 to host a private Yum repository. Instead of fiddling with authentication, you could try a different route: limit access to your private S3 bucket by IP address. This is fully supported; see the S3 documentation.
A second option is to use a Yum plug-in that provides the necessary authentication. It seems someone has already started working on such a plug-in: https://github.com/cgbystrom/yum-s3-plugin.
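To illustrate the IP-restriction route, here is a rough sketch of a bucket policy that only allows reads from your servers' address range, applied with the AWS CLI; the bucket name and CIDR are placeholders:

# Write a policy that allows GETs only from a trusted CIDR (placeholders throughout)
cat > yum-repo-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowYumClientsByIp",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-private-yum-repo/*",
    "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
  }]
}
EOF

# Attach the policy to the bucket
aws s3api put-bucket-policy --bucket my-private-yum-repo --policy file://yum-repo-policy.json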