I've been looking around for a way to run a Titan server in Tomcat, but I can't find any information about this.
Does anyone know how this can be done?
Since you are asking about running "Titan Server" in Tomcat, that really just means how to run Rexster inside Tomcat. We dropped official support for Tomcat many, many versions ago, but I believe there are still those who have it deployed that way, which means it is in fact possible. I guess this would also only apply to hosting the Jersey-based REST endpoints and not RexPro.
To get started I would simplify the stack and just get Rexster running in Tomcat. I would search around the gremlin-users mailing list for what people have posted on the topic, but I think that this one is the most relevant:
https://groups.google.com/forum/#!msg/gremlin-users/s0g9Sd_xjSw/LQ3_ugL680cJ
If I remember correctly, the key to making things work lies in this Rexster class: RexsterApplicationProvider. Note the class comments, which include a sample web.xml fragment.
I suspect you just want to fire up an instance of Titan with Cassandra etc. when Tomcat starts?
If this is the case, you can register a ServletContextListener in your web.xml that starts a singleton or service which opens the Titan graph connection, and then you can use it in your other servlets or whatever code base you have running.
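A minimal sketch of that approach (the configuration path and attribute name here are placeholders, and the shutdown method name depends on your Titan version):

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

// Register in web.xml with <listener><listener-class>...TitanGraphContextListener</listener-class></listener>
public class TitanGraphContextListener implements ServletContextListener {

    private TitanGraph graph;

    public void contextInitialized(ServletContextEvent sce) {
        // Placeholder path: point this at your Cassandra-backed Titan configuration
        graph = TitanFactory.open("/path/to/titan-cassandra.properties");
        sce.getServletContext().setAttribute("graph", graph);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        if (graph != null) {
            graph.shutdown(); // close() in newer Titan versions
        }
    }
}
```

Servlets can then pull the graph back out with (TitanGraph) getServletContext().getAttribute("graph").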
Related
What is the actual benefit of restarting a container when its configuration is updated, instead of updating the configuration at runtime (e.g. Spring Boot supports listening to ConfigMap changes, and Spring Cloud Config Server has this feature)? I actually see none, and I can see some drawbacks, such as the need to reset TCP connections.
Unlike Spring Boot, other stacks such as Node.js, Go or Rust don't have as big an overhead when booting up. The problem with Spring Boot is that it takes longer to start up than other "modern" stacks because it has to boot the JVM and Tomcat. Those two technologies were around well before Docker and Kubernetes were a thing, and honestly, that's the price to pay for running Spring Boot in containers.
And what's the benefit? If you're a single developer, probably none. If you work in a team and everybody tinkers with live ConfigMaps and environment variables, it can get hairy really quickly.
Assuming you're using, for example, Terraform to manage your configurations, everybody gets a nice overview of what is going on and which values are injected where.
I have a Spring Boot application that currently runs in embedded Tomcat. I have a file, states.csv, that I want to parse on startup to seed my states database table (I tried doing it via Liquibase, but that refuses to work).
I put the file in resources/main/ and that appears to work fine. My question is: if I decide against embedded Tomcat in the future (say, moving to AWS or a standalone Tomcat), is this still the best location to keep such files?
I don't want to code myself into a corner if there is a better way to do this.
This depends entirely on how you're reading the file. As long as you're grabbing it out of the classpath, you should be fine. (And I've run single-jar applications on both basic AWS VMs and Cloud Foundry on EC2 with no difficulty at all.)
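For example, something along these lines works the same whether you run the embedded-Tomcat jar or deploy a WAR to a standalone Tomcat, because it only relies on states.csv being on the classpath (the class name and the parsing body are placeholders):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.springframework.core.io.ClassPathResource;

public class StateSeeder {

    public void seedStates() throws IOException {
        // Resolves states.csv from the classpath, regardless of how the app is packaged
        ClassPathResource csv = new ClassPathResource("states.csv");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(csv.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // placeholder: split the line and insert a row into the states table
            }
        }
    }
}
```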
I have the task of migrating our webapp from grand old Spring 2.X (MVC, Security, Web Flow etc.) to Spring 3.X, which is quite some work but actually pretty straightforward. Now I've run into a few problems regarding our closed-source, commercial piece of community functionality, which apparently is built against Spring 2.X. I tried decompiling parts of it, recompiling with updated Spring packages etc., but it's very cumbersome, and after a day of work I stopped since this does not seem to be the right direction to be heading in.
Is it possible to split off the parts of the webapp that depend on the commercial stuff (which unfortunately cannot be swapped out just like that; it'd be a rather large project) into a separate 'legacy' webapp running Spring 2.X, while the larger part is migrated to Spring 3.X - with everything up and running in the same container, hopping to and fro?
So, e.g., when accessing URLs like /account/overview.htm the legacy webapp is hit, while the rest is served from the brand-new one?
If this sounds too far-fetched, I'm open to alternatives.
Thanks in advance!
Cheers
I guess the answer is both yes and no:
of course you can run two different WARs in the same container, regardless of the libraries they use
you must be aware that the two webapps hold two different contexts (beans, authentication, ...)
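To make the URL split concrete: in Tomcat the context path normally follows the WAR file name, so a deployment roughly like this (the names are just placeholders) would send /account/overview.htm to the legacy app and everything else to the new one:

```
webapps/
  ROOT.war      <- the migrated Spring 3.X application, serves "/"
  account.war   <- the legacy Spring 2.X webapp, serves "/account/*"
```

Since the two apps don't share beans or an HTTP session, anything that needs to cross the boundary has to go over HTTP, a shared database, or something similar.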
I have a web app running on Tomcat 6.0.35, which makes use of Spring 3.1.2, Hibernate 4.1.8 and MySQL Connector 5.1.21.
I have been trying to figure out what is causing Tomcat to keep running out of memory (PermGen) after a few redeploys.
Note: don't tell me to increase Tomcat's JVM memory, because that will simply postpone the problem.
Specifically, I made use of the VisualVM tool and was able to eliminate some problems, including some MySQL and Google thread issues. I was also able to discover and fix a problem caused by using Velocity as a singleton in the web app, and another caused by not closing some thread-local variables at the correct time/place. But I still haven't been able to completely eliminate or figure out this Hibernate issue.
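For anyone hitting the same kind of thread-local leak: the cleanup boils down to clearing the ThreadLocal once the request is done, roughly like the filter below (the filter and variable names are made up for illustration, not my exact code):

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ThreadLocalCleanupFilter implements Filter {

    // Hypothetical ThreadLocal holding per-request state
    private static final ThreadLocal<Object> REQUEST_STATE = new ThreadLocal<Object>();

    public void init(FilterConfig filterConfig) {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } finally {
            // Always clear, even on error, so the pooled Tomcat worker thread
            // doesn't keep a reference to classes from the old webapp classloader
            REQUEST_STATE.remove();
        }
    }

    public void destroy() {
    }
}
```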
Here is what I'm doing:
Deploy my webapp from my development IDE
Open a tomcat manager window in my browser
Start VisualVM and get the HeapDump on the tomcat instance
Go to the Tomcat manager and redeploy my webapp
Take another HeapDump in VisualVM
My first observation is that the WebappClassLoader for the original webapp is not garbage collected.
When I scrutinize the retained objects from the second HeapDump, the class org.hibernate.internal.SessionFactoryImpl features prominently, which leads me to believe that it IS NOT being destroyed/closed by Spring or something along those lines (and hence the WebappClassLoader still holds a reference to it).
Has anyone encountered this problem and identified the correct fix for it?
I don't currently have an idea of what could be amiss in your setup, but what I do know is that by using Plumbr you'll most likely find the actual leak(s).
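Beyond profiling, one thing worth double-checking is whether the Hibernate SessionFactory actually gets closed when the webapp is stopped. Normally Spring's ContextLoaderListener closes the context on undeploy and LocalSessionFactoryBean closes the SessionFactory with it, but as a safety-net sketch (assuming the SessionFactory is exposed as a Spring bean) you could register something like this after the ContextLoaderListener in web.xml so it runs first on shutdown:

```java
import java.util.Map;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.hibernate.SessionFactory;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class SessionFactoryCleanupListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        WebApplicationContext ctx =
                WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext());
        if (ctx == null) {
            return; // the Spring context never started or is already gone
        }
        Map<String, SessionFactory> factories = ctx.getBeansOfType(SessionFactory.class);
        for (SessionFactory sf : factories.values()) {
            if (!sf.isClosed()) {
                sf.close(); // releases caches/pools the old WebappClassLoader would otherwise retain
            }
        }
    }
}
```

If this listener turns out to be the thing that actually closes the factory, the real question is why the Spring context isn't being shut down on undeploy in the first place.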
This is cross-posted from the FuseSource forum and the ServiceMix forum.
I can't get DOSGi working in FUSE. I'm trying to get CXF's DOSGi 1.1-SNAPSHOT with ZooKeeper discovery onto FUSE 4.1.0.2. I'm also using ZooKeeper 3.2.1.
Everything works perfectly on Felix 2.0.0. I just follow the instructions on the DOSGi Discovery page and then install the Discovery Demo bundles. For DOSGi, I just use the cxf-dosgi-ri-singlebundle-distribution-1.1-SNAPSHOT.jar for DSW and cxf-dosgi-ri-discovery-singlebundle-distribution-1.1-SNAPSHOT.jar for ZooKeeper discovery. Then when I start the sample bundles with the sample service impl on one machine, I see the node creation in ZooKeeper. Then I start the sample client on another machine and I see the output on the service machine. Works great. I do have a warning about an XML error being ignored because some XSD couldn't be found, but it doesn't seem to affect anything. Oh, I also have to install the OSGi compendium bundle first.
When I move to FUSE, I have no such luck. The OSGi compendium bundle comes with FUSE, so no need to install that. I should just be able to install the dosgi-ri singlebundle and the dosgi-ri-discovery singlebundle, but that doesn't work. The dosgi-ri singlebundle has all kinds of overlapping bundles with ServiceMix. I get an error about port 8081 (or whatever the osgi.http.service parameter is) being already in use. Apparently the dosgi-ri singlebundle comes with the Pax Web service, which reads the same property as the ServiceMix HTTP service bundle that ships with ServiceMix.

That's when I switch to the cxf-dosgi-ri-multibundle-distribution-1.1-SNAPSHOT.zip and unzip it to take the parts I want. I take the dsw bundle out of the dosgi-ri multibundle and install that. No luck, because of the JDOM dependency. Then I install the JDOM that comes in the ri multibundle, which works fine. Then I go back to dsw, and that installs, so I think I'm getting somewhere. Time to go back and install the ri-discovery singlebundle. When I start that I get a Pax Logging ClassCastException saying something can't be cast to an OSGi LogService or some such. But that's just a logging error, and at the bottom it says it can't find the transport class for http://schemas.xmlsoap.org/soap/http. OK, so logging is screwed up and I'm missing some transport class. Well, clearly this comes from not installing enough from the ri multibundle, because it worked on Felix. So what else in there is necessary? The cxf-minimal-bundle, upon inspection, has the missing class causing that last error, so I install that. I try to start the discovery bundle, but I end up with some kind of CORBA broker exception. WTF. Who's using CORBA in all of this?

Then I go back and undo all of that and try to stick with the singlebundle distros of ri and ri-discovery, but just turn off the ServiceMix HTTP service. That crashes ServiceMix, and I can't restart it because the CXF JBI components end up with an unsatisfied dependency. Odd. I'll just ignore that because I don't use those anyway, and try to start my samples. I can't start the samples because it says Jetty can't start because the port is already in use. That doesn't make sense because I shut down the ServiceMix HTTP service already. Then I restart Jetty. Works? Maybe. My service gets registered and I can browse to the WSDL using Firefox, but there's no registration in ZooKeeper. I try to shut down the ri-discovery bundle and restart it, but I get a NullPointerException. Apparently the ri-discovery bundle never actually started up, due to one of the aforementioned errors. Then I started trying to take apart the ri-discovery singlebundle and pull out the internals. That didn't work because it's all apparently necessary, even though there are some libs inside we could do without.
End of the story. I can't get it to work. Can anybody else get it to work? I just want to run the discovery samples in SMX4. I'm pretty sure it's just a bundle conflict problem. Isn't this what OSGi is supposed to fix??? This is worse than just telling me what JARs you depend on and making me set up my classpath. At least then I'd eventually get the thing running.
My next steps, I think, will be to try again with the ri-multibundle, just the dsw and JDOM, plus the ri-discovery singlebundle. Then I'll try some of the cxf-fuse bundles or some of the cxf-rt bundles to get around the SOAP transport issue.
Edit notes: I need more than just showing the DOSGi bundles in an Active state. They don't actually do much until you try to expose a service through them. I do need to see multiple machines registering services with a ZooKeeper instance and other machines consuming those services -- just like the running DOSGi Discovery Sample.
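For context, "exposing a service through them" just means registering an ordinary OSGi service with the remote-service properties that the DSW bundle watches for. A rough sketch (the service interface/impl are placeholders, and the exact property keys depend on which RFC 119 / Remote Services draft your DOSGi snapshot implements; older drafts used osgi.remote.interfaces):

```java
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<String, Object>();
        // Property names assumed from the Remote Services spec; verify against your DOSGi version
        props.put("service.exported.interfaces", "*");
        props.put("service.exported.configs", "org.apache.cxf.ws");
        context.registerService(ExampleService.class.getName(), new ExampleServiceImpl(), props);
    }

    public void stop(BundleContext context) {
        // services registered via this context are unregistered automatically on stop
    }
}

// Placeholder service types used only for this sketch
interface ExampleService {
    String greet(String name);
}

class ExampleServiceImpl implements ExampleService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```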
I've been able to get CXF to expose the distributed service sample as a SOAP web service by using the minimal CXF bundle mentioned above, either by removing parts of the original CXF bundles, restarting the Jetty service, and then starting the sample service... or by installing the CXF minimal bundle, then starting my service, then immediately uninstalling the CXF minimal bundle, then restarting Jetty... I think that was the order. Neither of these will work from a clean startup, and having to restart services as a procedure to get DOSGi working is just bad. I don't even know why installing then uninstalling would do anything -- it shouldn't be leaving any artifacts around.
First point: looking at the CXF DOSGi mega-bundle, I think this is only for quick-n-dirty hacking in a bare OSGi runtime, basically the minimal environment provided by Equinox or Felix. It isn't really intended for richer environments like FUSE or ServiceMix, as you will likely clash on services from the bundle and the platform, as you appear to have seen.
I was able to get ServiceMix 4.0 to start cleanly (this is on Windows) and then I hot-deployed:
com.springsource.org.jdom-1.0.0.jar
cxf-bundle-minimal-2.2.1.jar
cxf-dosgi-ri-discovery-local-1.0.jar
cxf-dosgi-ri-dsw-cxf-1.0.jar
Using the ServiceMix console I listed all bundles and saw that all of the above were in the Active state (as expected). I listed the services, and the two CXF DOSGi bundles were exporting services, so that appeared to have worked correctly. No errors were reported in the log.
How familiar are you with OSGi? ServiceMix is quite large, and learning OSGi, ServiceMix and CXF/DOSGi together isn't going to be easy (in my opinion).
The supplied console isn't great for the OSGi stuff, and I'd suggest installing the Apache Felix web console bundles for a web interface.