I use the Alfresco SDK with the following command:
mvn install -Ddependency.surf.version=6.3 -Prun
Everything is fine, except when it gets stuck at this step while building the Alfresco Share WAR Aggregator:
[INFO] --- maven-war-plugin:2.6:war (default-war) @ share ---
[INFO] Packaging webapp
[INFO] Assembling webapp [share] in [/home/nico/aegif/projects/60_townpage/townpage-filing/townpage-filing/share/target/share-1.0-SNAPSHOT]
[INFO] Copying manifest...
[INFO] Processing war project
[INFO] Processing overlay [ id org.alfresco:share]
In such cases I just perform a clean and the problem is solved, but that takes time.
Is there anything I can do to avoid it getting stuck?
alfresco.version is 5.1.g
Ubuntu 16.10
Given the parameters you are using, I assume you are on Alfresco SDK 2.2 and trying to use a more recent version of Alfresco (5.1.f or newer) in an All-In-One (AIO) project.
Alfresco SDK AIO projects always add some overhead during restarts, because the SDK actually builds your modules, fetches the WARs and any additional modules they reference, applies the modules to the WARs (i.e. unzips each WAR and its AMPs into the same folder before re-packaging), and then starts an embedded Tomcat with some special configuration from the runner project using the new WARs. A complicated approach, if you ask me, and it is expected to cost a considerable amount of time and I/O (especially disk I/O), all the more so when you clean before you rebuild.
Back to your question: the step you are hanging on is where the SDK unzips the out-of-the-box Share WAR prior to applying AMPs to it, and there are a lot of reasons why things could go south there. Unless you provide more detailed output (for instance by adding -X or -e to your mvn command), I doubt anyone will be able to pinpoint exactly what is going wrong.
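For example, rerunning the exact command from your question with debug and error output enabled would look like this:
mvn install -Ddependency.surf.version=6.3 -Prun -X -e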
Be careful with running your project without cleaning, as you might end up with residual files that give you behaviour different from what the final artifacts would produce... I can imagine at least a couple of such scenarios!
Alternatively, may I suggest that you switch from the AIO approach to separate projects for Repo and Share? You can install multiple Tomcats on your machine: say, one for the repo on port 8080 and one for Share on 8081. Then you can develop on one tier while a Tomcat service provides the other one (for example, stop the Share Tomcat service and start a Share AMP from the SDK pointing to the local Alfresco Repo service on the other locally installed Tomcat). That way you can always clean and run quickly, using this command for running Share:
mvn clean install -PampToWar -Dmaven.tomcat.port=8081 -Ddependency.surf.version=6.3
Related
I have a Maven project structured like this:
/root
/CommonProject
/executable1
/executable2
/subroot
/subrootCommon
/...
So far I am trying to just deploy executable1.
I wanted the project to use Java 19; I am fine with Java 17 if that's easier.
When I activate Cloud Shell, I am able to:
change $JAVA_HOME to JDK 17
clone the project
package it with Maven
run it in Cloud Shell.
However, my project has no mapping for "/", only specific endpoints like "/test/hello", and I do not see anything in Web Preview on port 8080.
I have tried different ways to deploy. I am not familiar enough with Docker, so I tried Cloud Run with a Cloud Build trigger from source.
Here lies my current problem: every build has failed so far. It uses JDK 11, which is a problem (or at least one of them).
I have also tried adding a cloudbuild.yaml or a local Dockerfile just to deploy a jar built manually, but I am still failing.
FROM openjdk:17
COPY root/target/executable1-1.0-SNAPSHOT-jar-with-dependencies.jar /home/user/var/run/executable1.jar
CMD ["java", "-jar", "/home/user/var/run/executable1.jar"]
I have followed the same deployment steps that are shown in the how-to guides and the available online labs, so I think the issue may be that buildpacks do not correctly process projects with dependencies.
executable1 and executable2 depend on CommonProject. Do I need to split my big Maven project into separate projects so I can build each of them individually?
I have tried a Dockerfile, a cloudbuild.yaml, and something like a project.toml.
For now I would like to deploy just one project; at some point in the future, all the executable projects from this Maven build.
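A minimal sketch of one way this could be wired together, building only one module from the aggregator and then using the Dockerfile above with Cloud Build and Cloud Run (PROJECT_ID, the region, and the service name are placeholders, and the Dockerfile is assumed to sit at the root of the build context):
# build executable1 plus the modules it depends on (-am pulls in CommonProject)
mvn clean package -pl executable1 -am
# build the container image from the Dockerfile and push it to the registry
gcloud builds submit --tag gcr.io/PROJECT_ID/executable1
# deploy the image to Cloud Run, listening on port 8080
gcloud run deploy executable1 --image gcr.io/PROJECT_ID/executable1 --region europe-west1 --port 8080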
I have a quick question. Do you have a light-4j framework Docker image hosted somewhere to which I can just add my API jar and run Docker? I am having a hard time running my APIs generated with the codegen CLI in Docker. It consistently gives me the error Could not find or load main class com.networknt.server.Server.
Have you tried mvn clean install exec:exec? If you want to run with the jar file, you need to build with mvn clean install -Prelease to generate the final fat jar.
This is one of the features contributed by a community member to speed up the testing cycle, avoiding building all the extra artifacts on every iteration. It might confuse new developers, though. The generated README.md has some information on how to build and start. Let me know if it is not clear, and I will add extra info. When you run build.sh to generate a Docker image, the build is done with -Prelease inside the script.
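To summarize the two workflows (the jar name below is only illustrative, assuming a standard generated project):
# fast development cycle: build and run in place, without the fat jar
mvn clean install exec:exec
# final artifact: build the fat jar and run it directly
mvn clean install -Prelease
java -jar target/my-api-1.0.0.jar
# or let the generated script build the Docker image (it uses -Prelease internally)
./build.sh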
TL;DR: In order to debug the client side, I ran the gwt:run goal and launched the application in Chrome; after logging in, it threw the exception below and GWT Dev Mode didn't launch (none of the client-side breakpoints worked).
javax.el.ELException: /pages/common/gwt/commonLayoutGWT.xhtml: setAttribute: Non-serializable attribute with name sessionBean
at com.sun.faces.facelets.compiler.TextInstruction.write(TextInstruction.java:90)
at com.sun.faces.facelets.compiler.UIInstructions.encodeBegin(UIInstructions.java:82)
at com.sun.faces.facelets.compiler.UILeaf.encodeAll(UILeaf.java:183)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1859)
at javax.faces.component.UIComponent.encodeAll(UIComponent.java:1859)
at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:456)
...
What should I do, or what should I check to get my GWT Dev mode working properly?
Some background: previously, our team used some Ant scripts for compiling, debugging (server and client code), running and deploying our main application. It worked without problems, although the process was really cumbersome and manual. We decided to convert it to a Maven application some months ago, and we were able to successfully execute all actions/goals using Maven. Compiling, running and deploying the application became fast and convenient, which was our goal.
But until now we didn't notice that at some point in the process our client-side debugging had stopped working. Only after we got some bug reports and started trying to debug them did we notice the issue. So now I need to set up GWT Dev Mode, and I haven't been able to, no matter what I've tried.
I'm working with:
SmartGWT 4.0
JDK 1.8.0_121
GWT Eclipse Plugin 2.8.0
GlassFish 4.1
Maven 3.3
I tried to follow some instructions from gwtproject, using the default xsiframe linker in the gwt.xml file.
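That is, the module's gwt.xml presumably ends up with a line like the following (my assumption of the usual way this linker is enabled; in recent GWT versions it is already the default):
<add-linker name="xsiframe"/>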
But when I executed the gwt:compile goal, this error showed up:
[INFO] Linking into D:\Development\Repos\Git\Java\MyApp\myApp\target\classes\..\..\..\myApp\WebContent\pages\module\gwt\com.myapp.client.gwt.MyAppClient
[INFO] Invoking Linker Cross-Site-Iframe
[INFO] [ERROR] The Cross-Site-Iframe linker does not support <script> tags in the gwt.xml files, but the gwt.xml file (or the gwt.xml files which it includes) contains the following script tags:
To solve this, I used one of the recommended solutions by the error itself:
"...add this property to the gwt.xml file: <set-configuration-property name='xsiframe.failIfScriptTag' value='FALSE'/>"
I ran gwt:compile again, and it finished successfully.
I suggest running your server-side code in a separate server (could be mvn jetty:run) and running GWT Dev Mode only for the client-side code (use <noserver>true</noserver>).
That solves so many problems (with running webapps inside Dev Mode's embedded server) that it's the recommended setup nowadays.
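A rough sketch of that setup with the gwt-maven-plugin, using two terminals (the noserver property/element name can vary between plugin versions, so check your plugin's documentation):
# terminal 1: serve the webapp itself, e.g. with Jetty
mvn jetty:run
# terminal 2: Dev Mode for the client-side code only, no embedded server
mvn gwt:run -Dgwt.noserver=true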
Currently I have a situation where I develop a project, then run mvn install, and it gets put into my local Maven repository as a simple JAR file.
Then I have an "environment" crafted by some other guys, which includes a whole lot of bundles and stuff, and is ultimately run via mvn pax:run, which takes about 5 minutes.
I would like to be able to just run felix:update <bundle-name>, but I cannot bridge the gap between a Maven JAR artifact in the local Maven repo and a ready-for-provisioning bundle that I could put somewhere and then just run felix:update (or maybe uninstall/install).
When I try to run mvn pax:create-bundle with my project, it throws a "Containing project does not have packaging type 'pom'" exception.
Any help is highly appreciated
UPDATE: I've noticed that the problems with the re-installed bundle begin in its activator, with a ClassNotFoundException (although the class mentioned is, and always has been, present in the bundle, so it must be an issue with the classpath, the ClassLoader setup or whatever):
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1574)
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:2018)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at some.external.adapters.package.guice.SomeModule.configure(SomeModule.java:46)
at com.google.inject.AbstractModule.configure(AbstractModule.java:59)
If you have a path to a file which is the newly built bundle, you can update it from the Gogo shell as follows:
felix:update <bundleid> file:/path/to/file
refresh
Where <bundleid> is the numeric ID of the bundle that you want to update. The refresh command is needed in case any bundles depend on or import packages from the bundle you are updating; this command will cause the Framework to attempt to re-resolve them using the new dependency.
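If you are not sure of the numeric ID, the Gogo shell can list the installed bundles first; a short illustrative session (the bundle ID and jar path are made up):
g! lb                     # list installed bundles and note the numeric id
g! felix:update 42 file:/tmp/my-bundle-1.0-SNAPSHOT.jar
g! refresh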
I am happy to accept @neil-bartlett's answer, though I have to add some more context:
1) One of the biggest issues I had initially is that a JAR file in the local Maven repo IS NOT THE SAME as an OSGi bundle. In order to create the bundle, I had to run mvn bundle:bundle AFTER mvn install, and the bundle got created in the target/ folder.
2) Afterwards, in the Gogo shell, I could indeed run felix:update <bundle-id> file:C:/Users/blablabla/bundle-SNAPSHOT-2.0.jar, and for some reason these days it just works. The exceptions mentioned in the update to the original post do still occur, but they do not prevent proper installation of the updated bundle.
So, I'm writing the build and deploy scripts. The build is created with Ant, and the continuous build is done with Jenkins.
The build generates 3 different artifacts:
The war file
A zip with layouts
A zip with images
So far, so good, but now I need to write the deploy script, which should:
Deploy the war (artifact 1) to the Tomcat running on server 1
Place artifact 2 on server 1 in a specific directory
Place artifact 3 on server 2 in a specific directory
So I was talking with my colleague, and he said that we should also generate an artifact (maybe a deploy.xml) that deploys these artifacts when placed on the correct server.
So there would be another script (see the sketch after this list) that would:
Download the Jenkins artifacts
scp to each server and place the deploy.xml there
remotely invoke the deploy.xml
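A minimal sketch of such a wrapper script, assuming the Jenkins job URL, host names, paths and Ant target names shown below (all of them illustrative):
#!/bin/sh
# 1. download the Jenkins build artifacts
JOB=http://jenkins.example.com/job/myapp/lastSuccessfulBuild/artifact
curl -fO "$JOB/myapp.war"
curl -fO "$JOB/layouts.zip"
curl -fO "$JOB/images.zip"
curl -fO "$JOB/deploy.xml"
# 2. copy the artifacts and the deploy script to each server
scp myapp.war layouts.zip deploy.xml deployer@server1:/opt/deploy/
scp images.zip deploy.xml deployer@server2:/opt/deploy/
# 3. remotely invoke the deploy script on each server
ssh deployer@server1 'cd /opt/deploy && ant -f deploy.xml deploy-war deploy-layouts'
ssh deployer@server2 'cd /opt/deploy && ant -f deploy.xml deploy-images'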
What makes me a little uncomfortable is having the deploy.xml as a build artifact. The motivation behind this is to be able to deploy without needing access to the VCS repositories, so that a build is self-contained, i.e. any build could go into production with only what was generated by Jenkins.
Where should the deploy scripts be placed? Should they be only at the VCS or should they be build artifacts too?
Please provide sample deploy scripts if you have any.
I wrote my own deployment framework, consisting of different shell, batch, Python and other scripts. It neatly separates environment information from application information and allows me to quickly update deployment information and add new apps or environments. The orchestration of the different parts, however, is done by Jenkins. When just copying files to a Windows server, my Jenkins master (running on Windows) simply copies the files to a network share that exposes the target directory. Services I can restart remotely using sc.exe. When crossing the border to AIX, I use Jenkins slaves that are started via ssh on the target system. So distribution is managed by Jenkins; the actual work is done by the scripts.
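For the Windows case, the copy-and-restart step can stay as simple as a couple of commands run by the Jenkins job (host, share and service names below are illustrative):
sc \\appserver01 stop MyAppService
xcopy /E /Y build\output\* \\appserver01\deploy$\myapp\
sc \\appserver01 start MyAppService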