Ehcache causing Tomcat 7 to give 404

I have a Lift-based REST application. My dev environment is running the Jetty server built into SBT, and the application is deployed on Tomcat 7.
I recently integrated Ehcache into my setup, replacing a custom cache. It works flawlessly in my Jetty dev server. When I deploy to Tomcat, however, any URLs served by my Lift application get 404s. There are no exceptions, and no relevant log entries as far as I can find, aside from the 404s in the log.
When I comment out the lines referring to Ehcache and remove it as a dependency, Tomcat fires up, works fine, and serves those URLs. As soon as I uncomment Ehcache, it's all 404s again.
Does anyone know what is going on? I know Ehcache uses SLF4J for logging; is that somehow stopping Ehcache errors from appearing in my logs?
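For context, the Ehcache usage boils down to something like the sketch below (the object and method names are illustrative placeholders, not the real code):
import net.sf.ehcache.{CacheManager, Element}

// Rough sketch of the Ehcache usage that replaced the custom cache.
// "mycache" matches the <cache name="mycache"> entry in ehcache.xml;
// everything else here is a placeholder.
object AppCache {
  private val manager = CacheManager.create() // looks for ehcache.xml on the classpath
  private val cache = manager.getCache("mycache")

  def put(key: String, value: AnyRef): Unit = cache.put(new Element(key, value))

  def get(key: String): Option[AnyRef] =
    Option(cache.get(key)).map(_.getObjectValue)
}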
Relevant section of build.sbt:
Seq(
  "net.liftweb" %% "lift-webkit" % liftVersion % "compile",
  "net.liftweb" %% "lift-mapper" % liftVersion % "compile",
  "com.typesafe.slick" %% "slick" % "2.0.0-M3",
  "org.eclipse.jetty" % "jetty-webapp" % "8.1.7.v20120910" % "container,test",
  "org.eclipse.jetty.orbit" % "javax.servlet" % "3.0.0.v201112011016" % "container,test" artifacts Artifact("javax.servlet", "jar", "jar"),
  "ch.qos.logback" % "logback-classic" % "1.0.6",
  "org.specs2" %% "specs2" % "1.14" % "test",
  "mysql" % "mysql-connector-java" % "5.1.25",
  "net.sf.ehcache" % "ehcache" % "2.8.2",
  "javax.transaction" % "transaction-api" % "1.1",
  "org.slf4j" % "slf4j-simple" % "1.7.7"
)
Relevant section of ehcache.xml:
<cache name="mycache"
       maxEntriesLocalHeap="10000"
       maxEntriesLocalDisk="1000"
       eternal="false"
       diskSpoolBufferSizeMB="20"
       timeToIdleSeconds="21600"
       timeToLiveSeconds="43200"
       memoryStoreEvictionPolicy="LFU"
       transactionalMode="off">
</cache>
Any help would be greatly appreciated. Thanks in advance.

Finally solved this. It was two separate problems. First, there was a bug (or several) in the Tomcat 7.0.42 I was running in production: Ehcache was failing silently.
In a separate installation of 7.0.52, it was throwing an exception saying it was unable to locate my ehcache.xml file. Moving the file to WEB-INF/classes/ solved that exception and the application spun up. Moving back to my 7.0.42 installation, however, it still would not fire up.
After forcing my production installation to 7.0.52, the application fires up and works fine.
So in summary, I had two problems:
Ehcache was unable to find my config file
Tomcat 7.0.42 seemed to have a bug
Solutions:
Move ehcache.xml to WEB-INF/classes (see the sketch below for why the classpath root is where Ehcache looks)
Upgrade Tomcat from 7.0.42 to 7.0.52
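For anyone hitting the same thing: the reason WEB-INF/classes works is that, when no configuration is passed in explicitly, Ehcache 2.x looks for ehcache.xml at the root of the classpath, and in a WAR deployed on Tomcat that root is WEB-INF/classes (or a JAR under WEB-INF/lib). A minimal sketch of the equivalent lookup, with illustrative names:
import net.sf.ehcache.CacheManager

object CacheBootstrap {
  // With no arguments, Ehcache resolves ehcache.xml from the classpath root,
  // which inside a WAR means WEB-INF/classes (or a JAR in WEB-INF/lib).
  // An explicit equivalent would be:
  //   CacheManager.create(getClass.getResource("/ehcache.xml"))
  val manager: CacheManager = CacheManager.create()
}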

Related

Tomcat errors when upgrading Java from 1.8 to 1.9

I currently have an Apache Tomcat (9.0.43) server running on jre1.8.0_261. For various reasons, I'm trying to run the same code on jre-9.0.4. When I try this, I get the following error in Apache's localhost.date.log file:
java.lang.NoClassDefFoundError: Could not initialize class customName.customConfig.User
where customName and customConfig are parts of a user-defined class (which we wrote) that has been successfully running on Tomcat for years, and continues to work just fine on jre1.8. It's only when I try to run with jre-9.0.4 that I get the error.
Is there a way to get jre-9 to 'recognize' the class, for lack of a better word? I know where to find its .class file (and so does jre-1.8).

Camel validate against file from classpath

I want to validate XML against a schema, using Spring Boot 2 and Camel 3.
On localhost this works:
.to("validator:file:src/main/resources/D.xsd")
But when the application is uploaded to a Tomcat server with a context (for example D), I get an error:
Caused by: java.io.FileNotFoundException: src/main/resources/D.xsd (No such file or directory)
I think I need to change the path to use the classpath, but I am not sure how to make it work.
What I tried:
.to("validator:file:classpath:/src/main/resources/D.xsd")
.to("validator:file:resource:classpath:src/main/resources/D.xsd")
.to("validator:file:resource:classpath:/src/main/resources/D.xsd")
But it does not work.
In one of my applications (with Spring Boot 1.5 and Camel 2.x) this works fine:
.to("validator:classpath:folder/filename.xsd")
to validate against filename.xsd that is located in src/main/resources/folder
I've managed to fix it with:
.to("validator:D.xsd")

Mule 3.3 How to configure a cxf module at run time

I want to know if it is possible to configure a CXF element at runtime, while Mule is running.
For example, how is it possible to set "address" and "wsdlLocation"?
Both are not required ... you can give either the WSDL location or the address, e.g.:
<cxf:jaxws-service wsdlLocation="wsdl/helloservice.wsdl" serviceClass="com.myapp.serviceA.IServiceA" doc:name="SOAP"/>

Weblogic caching problems

I'm writing a WLST script to deploy an application with WebLogic 11g. The problem is that when I deploy an application (version A), undeploy it, then deploy version B, it deploys version A.
If I try to solve this by deleting the tmp/_WL_user/appname/ folder, it then won't deploy A or B because it looks in the tmp folder for the application (and fails because I cleared it out). I'm using the nostage option, so I don't understand why it's caching anything.
Any help you can offer would be greatly appreciated. Thanks!
Probably the undeploy of version A was not successful and version B was never deployed.
I'm not sure what you have in your WLST script; could you try the following:
# let's say the appName is testApp
# all of these properties could be moved to a props file
import java.lang

appName = 'testApp'
appPath = '/scratch/user/testApp.war'
appTarget = 'AdminServer'
username = 'weblogic'
password = 'weblogic1'
adminURL = 't3://hostname:adminport'

# start deploy/undeploy code
connect(username, password, adminURL)
for app in cmo.getAppDeployments():
    currAppName = app.getName()
    if currAppName == appName:
        print('Application ' + appName + ' already exists, undeploying...')
        undeploy(appName)
        # the sleep just makes sure we don't attempt the deploy before the
        # server has finished undeploying; it's a safety margin and may not
        # be required
        java.lang.Thread.sleep(60000)
print('Now deploying ' + appName)
deploy(appName, appPath, appTarget)
disconnect()

Job handler serialization incorrect when running delayed_job in production with Thin or Unicorn

I recently brought delayed_job into my Rails 3.1.3 app. In development everything is fine. I even staged my DJ release on the same VPS as my production app using the same production application server (Thin), and everything was fine. Once I released to production, however, all hell broke loose: none of the jobs were entered into the jobs table correctly, and I started seeing the following in the logs for all processed jobs:
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] NilClass# completed after 0.0151
2012-02-18T14:41:51-0600: [Worker(delayed_job host:hope pid:12965)] 1 jobs processed at 15.9666 j/s, 0 failed ...
NilClass and no method name? Certainly not correct. So I looked at the serialized handler on the job in the DB and saw:
"--- !ruby/object:Delayed::PerformableMethod\nattributes:\n id: 13\n event_id: 26\n name: memememe\n api_key: !!null \n"
No indication of a class or method name. And when I load the YAML into an object and call #object on the resulting PerformableMethod I get nil. For kicks I then fired up the console on the broken production app and delayed the same job. This time the handler looked like:
"--- !ruby/object:Delayed::PerformableMethod\nobject: !ruby/ActiveRecord:Domain\n attributes:\n id: 13\n event_id: 26\n name: memememe\n api_key: !!null \nmethod_name: :create_a\nargs: []\n"
And sure enough, that job runs fine. Puzzled, I then recalled reading something about DJ not playing nice with Thin. So, I tried Unicorn and was sad to see the same result. Hours of research later and I think this has something to do with how the app server is loading the YAML libraries Psych and Syck and DJ's interaction with them. I cannot, however, pin down exactly what is wrong.
Note that I'm running delayed_job 3.0.1 official, but have tried upgrading to the master branch and have even tried downgrading to 2.1.4.
Here are some notable differences between my stage and production setups:
In stage I run 1 Thin server on a TCP port, with no web proxy in front.
In production I run 2+ Thin servers and proxy to them with Nginx; they talk over a UNIX socket.
When I tried Unicorn it was 1 app server proxied to by Nginx over a UNIX socket.
Could the web proxying/Nginx have something to do with it? Please, any insight is greatly appreciated. I've spent a lot of time integrating delayed_job and would hate to have to shelve the work or, worse, toss it. Thanks for reading.
I fixed this by not using #delay. Instead I replaced all of my "model.delay.method" code with custom jobs. Doing so works like a charm, and is ultimately more flexible. This fix works fine with Thin. I haven't tested with Unicorn.
I'm running into a similar problem with Rails 3.0.10 and DJ 2.1.4. It's most certainly a different YAML library being loaded when running from the console vs. from the app server (Thin, Unicorn, Nginx). I'll share any solution I come up with.
OK, so removing these lines from config/boot.rb fixed this issue for me:
require 'yaml'
YAML::ENGINE.yamler = 'syck'
This had been placed there to fix a YAML parsing error by forcing YAML to use 'syck'. Removing it required me to fix the underlying issues with the .yml files. More on this here.
Now my delayed_job record handlers match between those created via the app server (Unicorn in my case) and the console. Both my server and delayed_job workers are kicked off within Bundler:
Unicorn
cd #{rails_root} && bundle exec unicorn_rails -c #{rails_root}/config/unicorn.rb -E #{rails_env} -D
DJ
export LANG=en_US.utf8; export GEM_HOME=/data/reception/current/vendor/bundle/ruby/1.9.1; cd #{rails_root}; /usr/bin/ruby1.9.1 /data/reception/current/script/delayed_job start staging
