Jenkins stops serving static files after a couple of days - caching

I run Jenkins for my continuous integration and I have the following problem: after Jenkins has been running fine for a couple of days, the URLs for static files stop being served. The CSS and JavaScript break and the whole Jenkins UI looks broken, even though it still runs the jobs as expected.
Any idea why?
Example of a URL:
http://myserver:8181/static/70f4ebef/css/style.css
Response:
HTTP ERROR 404
Problem accessing /static/70f4ebef/css/style.css. Reason:
Not Found
Powered by Jetty://
Calling http://myserver:8181/safeRestart fixes the problem, so I'm wondering whether it's a Jenkins issue or a Jetty/Jenkins cache conflict.
I run Jenkins 1.537.

It's happening because Jenkins unpacks its static resources into your /tmp directory, and something else (typically a scheduled cleanup job such as tmpwatch) is deleting files older than a certain number of days from there.
Refer to Jenkins issue 17526 for more info.
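The durable fix is to move Jenkins' temp directory somewhere the cleanup job will not touch. A minimal sketch, assuming a Debian/Ubuntu-style install that reads /etc/default/jenkins (the file name, variable name and service command differ on other distributions):

# Create a temp directory that tmpwatch and friends leave alone
sudo mkdir -p /var/cache/jenkins/tmp
sudo chown jenkins:jenkins /var/cache/jenkins/tmp

# In /etc/default/jenkins, point the JVM's temp directory at it
JAVA_ARGS="$JAVA_ARGS -Djava.io.tmpdir=/var/cache/jenkins/tmp"

# Restart so the static resources are unpacked into the new location
sudo service jenkins restart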

Related

Azure Cloud Services Classic - "Deployment could not be created"

We have a Cloud Service that we have been deploying/updating without issue. In the past two weeks, every time we try to deploy the package, we get the error "Deployment could not be created - There was an error processing your request. Try again in a few moments".
I am at a loss as to how to even debug the issue to get more detail. Any advice on how to get a better error description would be appreciated.
The only changes in this deployment are some changes to the static files in the package, so it is unclear what is causing the issue. The process we use is (1) build the package, (2) upload the package, (3) deploy to the staging environment. The package gets uploaded but fails to deploy (step 3).
Any help as to what the issue is, or how to get better diagnostic information, would be great.

Deployment time of ~150MB WAR file from Jenkins to Tomcat 9

Our Jenkins checks every 5 minutes whether the SCM has changed and, if it has, starts a build with SonarQube and JaCoCo, followed by a deployment to a Tomcat over the internal network.
This works very well most of the time; occasionally I have to remove the old versions and restart Tomcat.
What drives me almost mad is that it takes sooooooo long. I have already observed that the ~140MB are completely uploaded, but Jenkins still waits for something. I have adapted the values in the manager web.xml multipart config to 200MB each.
Any help, idea or question is very much appreciated!
Thanks a lot!
Aurel
Use webhooks (from GitHub, GitLab) instead of polling the Git repo every 5 minutes. That will pick up changes from the repo faster.
If the WAR file has been deployed many times, you may need to undeploy the app before copying the new version of the WAR file for redeployment (see the sketch below).
Tomcat does need a lot of PermGen; try increasing it. [Refer]
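For the undeploy/redeploy step, here is a minimal sketch against Tomcat's manager text interface; the host, credentials and the /myapp context path are placeholders, and the manager user needs the manager-script role:

MANAGER=http://tomcat-host:8080/manager/text
AUTH=deployer:secret

# Undeploy the old version first so stale resources do not linger
curl -s -u "$AUTH" "$MANAGER/undeploy?path=/myapp"

# Upload and deploy the new WAR in a single request
curl -s -u "$AUTH" -T target/myapp.war "$MANAGER/deploy?path=/myapp&update=true"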

Critical Caching issue in Laravel Application (AWS Server)

I am facing a critical issue in my application; it is developed in Laravel and Angular. The issue is that I am getting the old email templates on the live site, while on the local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket, and a Bitbucket Pipeline pushes the code to the AWS server directly.
I have already run the cache commands for Laravel and restarted the jobs, but I am still getting the same issue. If anyone has experienced the same issue or knows how to resolve it, please guide!
I think you can try one of the following ways to overcome the issue; I faced a similar issue and resolved it in these ways:
Try deleting the cached view files manually from storage/framework/views (see the sketch after this list)
Upload the code for the particular module directly to AWS without going through the pipeline
Restart your server
This will surely resolve your issue!
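For the first point, a sketch of the usual commands, assuming a default Laravel layout and shell access to the server:

php artisan view:clear      # drops the compiled Blade views
php artisan cache:clear     # clears the application cache
php artisan config:clear    # clears the cached configuration

# Same effect as view:clear, done by hand
rm -f storage/framework/views/*.php

# If the emails are sent from queued jobs, restart the workers too,
# since running workers keep the old code (and old templates) in memory
php artisan queue:restart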
Since you are using a Laravel and Angular application deployed on AWS, I assume that Bitbucket is pushing the code and build commands are fired on every push. There are a few things which can help you:
1. Rebuild the Angular side on every push, since the Angular build hashes all the files in the dist folder
2. Delete the Laravel cached files, which are stored in storage/framework/views
3. Check that your server is pointing to the right project folder
If point 1 or 2 works, you can automate it by running the corresponding CLI commands after every push; a sketch follows this list.
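A minimal post-deploy script along these lines could cover points 1 and 2; the project path is a placeholder and I am assuming the pipeline can run a script on the server after each push:

#!/bin/sh
set -e
cd /var/www/myapp   # placeholder for the deployed project root

# Point 1: rebuild the Angular bundle so the dist/ filenames get fresh hashes
npm ci
npm run build

# Point 2: drop the compiled Blade views and cached configuration
php artisan view:clear
php artisan config:clear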

Jenkins does not show some build logs and failed builds in history

We have updated our Jenkins CI tool; the new version number is 1.596.1. The problem is that, for some projects, we cannot see the build console log. When we try to view the log, Jenkins responds with a 404 error page.
Also, when this problem occurs in a particular job, we have observed that failed builds are not shown in the build history list on the job page. Even though we can see from the Gerrit history that there was a build, when I click on the job with the specified ID, I again get the 404 error.
We are using Jenkins with Git/Gerrit, and most builds are triggered from the Gerrit review system.
New information: jobs that have this problem also have the inconsistent timestamp problem.
It seems that this problem also occurs when a Jenkins job is triggered by a patch set creation/merge in the Gerrit review system.
What may be the root cause of this problem? Is it the version we're using, or some other factor?
This is a persistent problem with Jenkins. A restart, which you can trigger from the URL bar when you're authenticated, is the fix. Visit
https://yoursite/jenkins/safeRestart
to restart the instance once running builds have finished. (Note that /quietDown only stops new builds from starting; it does not itself restart Jenkins.)
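If you need to script the restart (a stopgap, not a root-cause fix), something like the following should work, assuming an admin user's API token; the URL and credentials are placeholders:

# Trigger a safe restart from the command line
curl -X POST -u admin:API_TOKEN https://yoursite/jenkins/safeRestart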

OpenURI Open Throws 403 Forbidden

I've written code that allows users to search for specific images through the Google Image Search API and then downloads those images using Carrierwave's remote image functionality. We're getting bug reports, though, that certain URLs are throwing 403 Forbidden errors, and we traced it back to
Kernel.open(url)
Scanning existing issues got me to "'open_http': 403 Forbidden (OpenURI::HTTPError) for the string “Steve_Jobs” but not for any other string" which suggests that the problem is due to the missing User-Agent and so we added this to our call.
Kernel.open(url, 'User-Agent' => "Ruby/#{RUBY_VERSION}")
This resolved the issue in our dev environment, but it had no effect at all in our production environment, which is the most frustrating part. My production environment (running on AWS EC2 Ubuntu 12.04) fails every time, and for far more URLs than my dev environment (OS X 10.9.5). Both environments are running Ruby 2.0.0-p353 and Rails 4.0.5.
We've isolated several test URLs with which we can consistently reproduce this problem.
Example: http://www.lowes.com/images/LCI/Planning/HowTos/ht_BuildaHomePlayground_kit.jpg
I'm running out of ideas, but it seems to be something specific to the AWS box (since it works in dev), so is it possible that AWS is using some sort of outbound filter/proxy, or that Ubuntu 12.04 has a known issue with OpenURI?
Scouring the internet, I'm starting to run out of options.
UPDATE
I have two AWS instances running that were supposed to be identical to each other, but upon closer examination, one is running GNU/Linux 3.2.0-58-virtual and the code above works properly (that's my staging environment), while the other is running GNU/Linux 3.2.0-68-virtual and the code above fails (that's my production environment). So the issue would seem to lie in whatever changed between 58 and 68.
For now, I'm switching my production and staging environments so that the issue is resolved, though it feels like a temporary and invalid fix, since the staging environment is likely to be upgraded at some point, landing us back at square one.
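For reference, here is a self-contained sketch of the kind of call we are testing with; the helper name and header values are mine, not from our codebase:

require 'open-uri'

# Hypothetical helper: fetch a remote image with explicit headers,
# since some servers/CDNs reject Ruby's default User-Agent
def fetch_image(url)
  Kernel.open(url,
              'User-Agent' => 'Mozilla/5.0 (compatible; MyApp/1.0)',
              'Accept' => 'image/*') { |io| io.read }
rescue OpenURI::HTTPError => e
  warn "Failed to fetch #{url}: #{e.message}"   # e.g. "403 Forbidden"
  nil
end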
