I'm writing a project with a lot of static files
Whenever I change a custom.js file, even though it gets changed in both
web/static/assets/custom.js and priv/static/assets/custom.js,
when I try to reach the resource I get the old version, which is also corrupted most of the time.
I tried:
restarting the server
running brunch build
removing the whole _build directory
changing the files further
clearing browsers cache
using curl localhost:4000/assets/js/custom.js -H 'Pragma: no-cache'
Still the server serves the old file.
Edit
It seems to be an issue with the mtime difference between the Vagrant VM and the host.
So the real question is:
How do I eliminate that issue?
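One thing that sometimes helps is keeping the guest clock tightly synced to the host so mtimes line up. A minimal sketch using VirtualBox's Guest Additions time-sync properties, run on the host (the VM name my_vm is a placeholder, and this assumes the Guest Additions are running inside the VM):

# Tighten the drift threshold to 1 second (value is in milliseconds)
VBoxManage guestproperty set my_vm "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 1000
# Also correct the clock immediately when the VM is restored from a saved state
VBoxManage guestproperty set my_vm "/VirtualBox/GuestAdd/VBoxService/--timesync-set-on-restore" 1

After the clocks agree, touching the changed files inside the VM may also nudge the watcher into rebuilding.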
I have an interesting issue and I just can't figure out why I'm getting it or what to do about it.
Basically, I store all my development projects on my Synology NAS for local access between my various devices. There has never been a problem with this until I started playing around with Elixir and, more specifically, Phoenix. The issue appears when running mix phx.server; I get the following:
[warn] Phoenix is unable to create symlinks. Phoenix' code reloader will run considerably faster if symlinks are allowed. On Windows, the lack of symlinks may even cause empty assets to be served. Luckily, you can address this issue by starting your Windows terminal at least once with "Run as Administrator" and then running your Phoenix application.
[info] Running DiscussWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:4000 (http)
[error] Could not start node watcher because script "z:/elHP/assets/node_modules/webpack/bin/webpack.js" does not exist. Your Phoenix application is still running, however assets won't be compiled. You may fix this by running "cd assets && npm install".
[info] Access DiscussWeb.Endpoint at http://localhost:4000
So I tried what it suggested and ran it in CMD as admin, but to no avail. After some further inspection I tried to create the symlinks manually, but every time I would get an Access is denied. error (yes, this is an elevated CMD).
c:\> mklink "z:\elHP\deps\phoenix" "z:\elHP\assets\node_modules\phoenix"
Access is denied.
So I believe it has something to do with the fact that the symlinks are being created on the NAS, because if I move the project and host it locally it works. Now I know what you're thinking: yes, I could just store the projects locally on my PC, but I like to have them available between PCs without having to transfer files or rely on git etc. (i.e. offline access), not to mention that the NAS has a full backup routine.
What I have tried:
Setting guest read write access on the SMB share
Adding to /etc/samba/smb.conf on my Synology NAS:
[global]
unix extensions = no
[share]
follow symlinks = yes
wide links = yes
Extra logging on SMB to see what is happening when I try it (nothing extra logged)
Creating a symbolic link from my MAC (works)
Setting all of the fsutil behavior query SymlinkEvaluation values to enabled (the exact commands are shown below)
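For reference, the symlink evaluation policy was checked and set from an elevated prompt roughly like this (these are the standard Windows flags; nothing here is specific to the NAS):

:: Show the current local/remote symlink evaluation policy
fsutil behavior query SymlinkEvaluation
:: Enable every combination: local-to-local, local-to-remote, remote-to-local, remote-to-remote
fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2L:1 R2R:1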
At the moment I am stuck and unsure of what to try next, or even whether it is possible. I am considering just using NFS instead, but will I face the same issues as with SMB?
P.S. I faced a similar issue with Python venvs a while ago, just a straight-up Access is denied. error. I gave up, moved only the venv locally and kept the bulk of the code on the NAS. (This actually ended up being the best solution for that case, because the environments of the devices on my network clashed etc.)
Any ideas are greatly appreciated.
I am getting the error "Mage registry key "_singleton/my_observer" already exists", which is preventing me from clearing the cache and getting the site working again. I had originally added a duplicate my_observer class to my config.xml by accident, which is what caused the initial problem, and I have since removed both instances completely, but I still get the same error. I have removed all instances and mentions of my_observer from the site, but the error still keeps popping up (I used PhpStorm to search the entire project for any mention, and it found none).
I have tried flushing the cache through a shell command, but I only get the error 'php_network_getaddresses: getaddrinfo failed: Name or service not known'.
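As an aside, that php_network_getaddresses error usually means some host name in the configuration (the database or cache backend) cannot be resolved from the CLI environment, rather than a problem with the cache itself. A quick check, where the host name below is a placeholder for whatever app/etc/local.xml points at:

# Does the shell environment resolve the backend's host name?
getent hosts cache-backend.example
# If resolution fails, fix DNS or /etc/hosts before retrying the cache flush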
I have emptied the var/cache folder and the var/session folder as well, to no avail.
I have cleared the cache in my browser, used another browser, and used incognito mode, all of which did not work either.
I know I need to flush the cache to make the site work again, but I basically can't flush the cache until I flush the cache.
Thank goodness I found an answer to my question. My cache is a Redis cache, and I used the following commands to flush it through the CLI (note: I had to install redis-tools for this to work).
Pick one of the options below:
redis-cli FLUSHDB
redis-cli -n DB_NUMBER FLUSHDB
redis-cli -n DB_NUMBER FLUSHDB ASYNC
redis-cli FLUSHALL
redis-cli FLUSHALL ASYNC
You can find your DB number in app/etc/local.xml.
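If it helps, here is a quick way to pull the database number out of local.xml; this assumes the usual Magento 1 layout, where the Redis settings live under <cache><backend_options>:

# Show the Redis backend options, including the <database> element
grep -A 10 '<backend_options>' app/etc/local.xml
# Or, if libxml2's xmllint is installed:
xmllint --xpath 'string(//cache/backend_options/database)' app/etc/local.xml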
Here is my source:
https://www.cyberciti.biz/faq/how-to-flush-redis-cache-and-delete-everything-using-the-cli/
I'm starting to develop a site on Ubuntu 14.04. I'm using Apache2, so I placed my files under the /var/www/html/ folder. Everything was working fine, but I had to restart my current work, so I copied the entire project folder from another path, like this:
$ sudo cp ~/path/to/folder /var/www/html/
And now I can't see my images (just a simple img tag). I just see this.
When I request the file directly, I can see that I'm getting a 403 Forbidden error. I saw a suggestion to add the option Require all granted, but that option is already set in my Apache config. I suspect it is because the new folder is a copy of a folder without root permissions, so I tried chmod, but that also didn't work, so I'm completely lost now.
So, how can I see my images from localhost? And, most importantly, why did this happen all of a sudden?
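A common culprit after a sudo cp is ownership and directory permissions: files copied with sudo end up owned by root with whatever mode the source had, and Apache may not be able to traverse or read them. A minimal sketch of the usual fix, assuming a Debian/Ubuntu-style Apache running as www-data and a project copied to /var/www/html/folder (the folder name is a placeholder):

# Give the web server ownership of the copied tree
sudo chown -R www-data:www-data /var/www/html/folder
# Directories need execute (traverse) permission, files need read permission
sudo find /var/www/html/folder -type d -exec chmod 755 {} \;
sudo find /var/www/html/folder -type f -exec chmod 644 {} \;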
I'm trying to set up a development server with PuPHPet, which is essentially just a pre-made build of Vagrant with PHP, Nginx and a few other things pre-installed.
I'm having a weird caching issue with my .css files.
When I access my .css file directly at my dev URL, it shows part of the file. This is the file as it was originally before I started editing it. You will notice from my screenshot that I've deleted the entire contents of the file and replaced it with the numbers "12345". When I refresh the .css file in my browser, I see the first 5 characters of the old file. Adding an extra character restores an additional character from the old file.
Restarting nginx does not clear the cache. Ctrl+F5 does not clear the cache. Checking the file contents from vagrant ssh:
[08:11 PM]-[vagrant@precise64]-[/var/www/public/css]-[hg default]
$ cat main.css
12345
I can see the file is up to date. The old file it is partially displaying simply does not exist anymore. My best guess is that it's reading the length of the file on disk and then pulling the actual contents from memory.
The built-in PHP 5.4 development server does not have this problem, so I'm pretty sure Nginx is the culprit.
How can I get Nginx to behave in a sane fashion?
Most probably it's the known VirtualBox bug with the sendfile system call on shared folders.
Try disabling sendfile in your Nginx config:
sendfile off;
(In Apache, the equivalent is EnableSendfile off.)
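For the Nginx case, the directive usually goes in the http (or server) block of nginx.conf inside the VM; afterwards it may help to verify the config and reload (commands assume a standard Debian/Ubuntu layout):

# Check the config parses, then reload nginx so "sendfile off;" takes effect
sudo nginx -t && sudo service nginx reload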
I am facing some issues when downloading large files on Heroku. I have to download and parse files larger than 1 GB. What I am trying to do right now is use curl to download them into the /tmp folder (of a Rails application).
The curl command is: "curl --retry 999 -o #{destination} #{uri} 2> /dev/null" and destination is Rails.root.join("tmp", "file.example")
The problem is that after a few minutes of downloading, the curl process is killed long before the download is finished. Before it dies, the logs show lots of "Memory exceeded" messages. This led me to think that when I save to the /tmp folder, the downloaded content is being stored in memory, and when memory hits its limit the process is killed.
I would like to know if any of you have experienced a similar issue on Heroku and whether saving to the /tmp folder really works like this. If so, do you have any suggestions for getting this working on Heroku?
thanks,
Elvio
You are probably better off saving the file to an external cloud provider like S3, using the fog gem. In any case, Heroku has a read-only filesystem, so they won't let you curl to it, much less write to it.
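If the file really does have to pass through the dyno, one way to avoid filling local storage is to stream the download straight to S3 instead of writing it to /tmp first. A rough sketch using the AWS CLI rather than the fog gem mentioned above (the bucket name and URL are placeholders, and this assumes AWS credentials are configured in the environment):

# Pipe the response body directly to S3 without buffering it on disk
curl --retry 999 -fsSL "$SOURCE_URL" | aws s3 cp - s3://my-example-bucket/file.example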