(yet another) rake assets:precompile is slow while deploying to Heroku - heroku

Like many others here, I am having issues with the pre-compilation of my assets when deploying to Heroku. I've tried nearly all of the suggestions found here, but to no avail. I am using Ruby 1.9.3 and turbo-sprockets-rails-3, and I have set config.assets.initialize_on_precompile = false.
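For reference, this is the setting I mean, in its conventional place (adjust the file if yours lives elsewhere):

    # config/application.rb -- skip loading the full Rails environment during asset precompilation
    config.assets.initialize_on_precompile = false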
I am precompiling multiple (5) .js and .css files. What I have noticed is that the pre-compilation of each file seems reasonably quick (between 0.2 and 25s per file), but the total time is around 6 minutes. Watching the logs, I can see each file getting pre-compiled quickly, but then it seems to idle for a long time between files before continuing. Or at least it seems that way, because there is no further output.
Any ideas what might be going on, or how I could get more detailed debug output to pinpoint the problem?

Do you happen to sync your assets to AWS S3? If so, did you set FOG_REGION correctly?
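If you are using something like the asset_sync gem, a wrong or missing region can make every upload crawl. A minimal sketch of the relevant initializer, assuming the usual environment variables (the names here are illustrative, not taken from your app):

    # config/initializers/asset_sync.rb -- only relevant if compiled assets are pushed to S3
    AssetSync.configure do |config|
      config.fog_provider          = 'AWS'
      config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
      config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
      config.fog_directory         = ENV['FOG_DIRECTORY'] # the S3 bucket name
      config.fog_region            = ENV['FOG_REGION']    # must match the bucket's actual region
    end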

Related

Typescript compilation becomes slow - WebStorm

For the past few days, TypeScript compilation has been getting slower and slower. Compiling a single file with the WebStorm file watcher used to take around 1-4 seconds, but I have added many TS files (75 now, which I don't think is really a lot...) and compilation now takes about 10 seconds for a simple file.
If I need to change branches or update a definition, it can take around 5 minutes. My computer is really powerful (a gaming machine) and I don't understand why it's becoming this slow.
All files are compiled one by one, the WebStorm way... And if the server is running at the same time with a watcher, things go crazy because it restarts maybe 50 times. (Obviously I shut it down, but a watcher isn't really useful if I have to shut it down...)
Any ideas? I've looked at several discussions like mine, but so far I haven't really found a workaround.
You could point the file watcher at a grunt task that compiles all your files in a single pass, e.g. grunt-ts (https://github.com/basarat/grunt-ts), which compiles all your files with a single call to tsc.
PS, disclosure: I am one of the authors of grunt-ts.

Compass watch hangs

I've been successfully using compass watch on the command line to automatically recompile my CSS whenever an SCSS file changes.
Now, it is hanging my system. It is taking up more than 10 GB of RAM. I've tried rolling back to earlier commits in my repo, and the problem continues. Thoughts?
Make sure your paths and your files are correct. Sometimes corrupted images can cause this problem.
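Double-checking the directory settings in your Compass config.rb is a quick way to rule out a path problem; a typical layout looks something like this (the directory names are examples, not taken from your project):

    # config.rb -- Compass project configuration
    http_path       = "/"
    sass_dir        = "sass"         # where the .scss sources live
    css_dir         = "stylesheets"  # where compiled CSS is written
    images_dir      = "images"
    javascripts_dir = "javascripts"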

Rails 3.1 production assets: big files are cut into pieces

Since I started using Rails 3.1 and the asset pipeline, I have had a big problem in production mode.
When I ran bundle exec rake assets:precompile, I got errors like
'myjsfile.js' has an invalid UTF-8 byte sequence
in particular with the tiny_mce plugin JS files.
So I gave up, since everything was working OK in development on my Mac, and I put this line in production.rb: config.assets.compile = true
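In other words, the environment file now contains roughly this, which makes Rails compile any missing asset on demand at runtime instead of requiring precompiled files:

    # config/environments/production.rb -- workaround: compile assets on the fly instead of precompiling
    config.assets.compile = true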
With that change, the JS files are generated without errors.
On the other hand, there is a big problem with large files, like jquery.js.
Regularly, Rails generates only half of the jQuery file, and the only way I have found to fix the problem temporarily is to visit the JS file's URL (http://myapp.com/assets/jquery.js) and refresh the page many times; after a while the jQuery file is whole again.
Then a few days later the problem is back and I have to do this all over again.
In fact, it's as if the process stopped halfway through compiling big files and the server served the half-compiled file.
Have any of you had this problem? Any idea where it might come from?
I use nginx and Passenger on an Ubuntu server in production. I never encounter this problem on my Mac.
Thanks in advance for your help!
The pre-compilation process will fail if you run out of memory on your server. Try running rake assets:precompile on your Mac and committing the generated assets so that you can get them onto your server.
In the longer run, run the precompile step on an intermediate CI server for every successful build.

magento - $ is not a function. But only on local dev server

I took a backup of my live Magento site yesterday (zipped up the files and took a DB dump, then recreated the site locally from those dumps).
Oddly, though, on my local machine I get a Firebug error stating "$ is not a function", and this error recurs every 500ms or so, so after a minute or two I have thousands of identical errors in the console.
The site is an exact replica of my live site, and I don't get the error there, so I'm stumped!
Usually I would suspect a Prototype/jQuery conflict, but it only seems to happen on my local machine.
Any one have a clue what might be going on?
Thanks
1. Load a page where you see the error.
2. View the source of the page.
3. Find the line that's supposed to load prototype.js by searching for the string prototype.js, e.g. http://magento.example.com/js/prototype/prototype.js.
4. Discover that, for one of myriad reasons, the file isn't loading (wrong URL, permissions, corrupt file, etc.).
5. Address the problem discovered above.
OK, so this was the problem:
The reason it worked on live and not on dev was that I had Merge JS enabled on live but not on dev, so live was serving an old cached bundle of JS. Disabling Merge JS on live showed that the problem did in fact occur on the live site as well.
That allowed me to debug further, and I discovered that the problem lay with my jquery.hove.intent.js file. I updated it to the latest version and that solved everything! :)
Thanks all for your help and input though.

Cucumber parse speed

We have been using Cucumber for some time and now have over 200 scenarios. Our startup is getting very slow, which makes a big difference in our edit-test-commit cycle. The problem seems to be the parsing of the feature files. Is there a way we can speed this up?
NOTE: We are using IronRuby, which has a known slow startup time. However, this startup time (about 30 seconds) is small compared to the time spent parsing (2-3 minutes) which we can see because of the side-effects of our env.rb code.
EDIT: Running only specific tags doesn't help reduce parse time because Cucumber still has to parse all the files to read the tags in the first place.
It is possible to run only the feature files in a specific directory by passing that directory to cucumber. This causes only the features under that directory to run and, more importantly, nothing in other directories gets parsed. So you can reduce run time by organizing feature files into directories and only running the relevant directory.
You could just test the scenarios you're working on at the moment. If you put the tag @wip (work in progress) before a scenario and run 'rake cucumber:wip', you will only run the scenarios tagged @wip.
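If your project doesn't already define that task, a cucumber:wip rake task is conventionally set up along these lines (the exact options vary from project to project, so treat this as a sketch):

    # lib/tasks/cucumber.rake -- run only the scenarios tagged @wip
    require 'cucumber/rake/task'

    namespace :cucumber do
      Cucumber::Rake::Task.new(:wip, 'Run work-in-progress scenarios') do |t|
        t.cucumber_opts = "--tags @wip"
      end
    end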
