I have a set of bundles I created with Maven + BND. One of the bundles contains my Vaadin "Application", the others have some utilities and additional editors.
I can run this app on a Tomcat server and everything is OK. Then I tried running it in OSGi (Apache Felix). After solving many problems, I finally managed to get the OSGi runtime up with all the bundles loaded and activated correctly.
I can even access the first page at "localhost:8080/bat"; it no longer shows "404 not found".
The problem: the start page only shows some unformatted text from my app; it can't load its Vaadin resources (CSS, maybe the widgetset, etc.).
The debug window says:
Starting Vaadin client side engine. Widgetset: com.vaadin.terminal.gwt.DefaultWidgetSet
Widget set is built on version: 6.6.6
Warning: widgetset version 6.6.6 does not seem to match theme version
Starting application bat-97301
Vaadin application servlet version: 6.6.6
Application version: 0.0.1
inserting load indicator
Making UIDL Request with params: init
Server visit took 9ms
...
Assuming CSS loading is not complete, postponing render phase. (.v-loading-indicator height == 0)
Assuming CSS loading is not complete, postponing render phase. (.v-loading-indicator height == 0)
....
CSS files may not have loaded properly; it looks like the Vaadin resources can't be loaded.
So, the question is: what's a proper structure for a Vaadin application packaged as an OSGi bundle?
Here's my OSGi bundle structure (created with Maven + BND); I skipped some Vaadin Reindeer theme folders as unimportant:
├───com
│ └───my
│ ├───demomodules
│ ├───preferences
│ ├───widgetset
│ └───workspaces
├───META-INF
├───VAADIN
│ ├───icons
│ ├───themes
│ │ ├───mytheme
│ │ └───reindeer
│ │ ├───a-sprite-definitions
│ └───widgetsets
│ ├───com.my.widgetset.Vaadin1Widgetset
│ │ └───ie6pngfix
│ └───WEB-INF
│ └───deploy
│ └───com.my.widgetset.Vaadin1Widgetset
│ ├───rpcPolicyManifest
│ └───symbolMaps
└───WEB-INF
I just recently did this exercise. Googling "Vaadin and OSGi" reveals that there are different takes on how to integrate, and at which level, e.g. component or application. However, the key realization is that you must arrange things so that the VAADIN resources are accessible from the client, i.e. they can be served as resources from your "servlet". I don't think the bundle structure as such will help you here; you must deal with the HTTP Service and give it instructions on how to serve the content.
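For illustration, here is a minimal sketch of registering the bundle's VAADIN directory with the OSGi HTTP Service. The class name, and the assumption that the resources sit under /VAADIN inside the bundle, are mine:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

// Bundle activator that makes the /VAADIN resources servable to the browser.
public class VaadinResourcesActivator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        ServiceReference<HttpService> ref =
                context.getServiceReference(HttpService.class);
        if (ref == null) {
            return; // a real bundle would use a ServiceTracker instead
        }
        HttpService http = context.getService(ref);
        // Serve the bundle's /VAADIN folder under the /VAADIN alias so that
        // requests like /VAADIN/themes/mytheme/styles.css can be answered.
        http.registerResources("/VAADIN", "/VAADIN", null);
    }

    public void stop(BundleContext context) throws Exception {
        // Unregistering the alias and ungetting the service are omitted for brevity.
    }
}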
Take a look at the Vaadin examples by Neil Bartlett at https://github.com/njbartlett/VaadinOSGi, specifically the vaadinbridge project. That helped me understand the issues.
Another approach might be to deploy the bundle on an OSGi container that understands WARs, such as Virgo. But that is just a guess.
I have a series of Go files that are related in use but logically independent. They all use a common set of helper functions defined in a separate file.
My directory structure is shown below.
src/
├── foo1.go
├── foo2.go
├── ...
├── fooN.go
└── helper/
└── helper.go
The foox.go files are all of this form:
package main
import help "./helper"
// functions and structs that use functionality in
// helper but are not related to anything going on
// in other foox.go files
func main() {
// more things that use functionality in helper
// but are not related to anything going on in
// other foox.go files
return
}
I was running specific files with go run foox.go, but recently updated my version of Go. This workflow is now broken, since relative imports are no longer permitted:
"./helper" is relative, but relative import paths are not supported in module mode
What is the correct way to structure a collection of independent Go files that all depend on the same collection of helper functions?
All the guidance says to use modules, but in this case that would mean a separate module for each foox.go file, where each file contains funcs, structs, etc. that will never be used in any other module.
All I want is to be able to run a single .go file that includes another single local .go file, without the hassle of creating dozens of modules.
You can disable module mode by setting the environment variable GO111MODULE=off.
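Assuming the original layout, this restores the old per-file workflow, for example:

GO111MODULE=off go run foo1.go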
I am new to FreeMarker and am writing new software. Before implementing any functionality, I want a complete folder structure to be created dynamically based on user input (project name, package name, etc.), but I am not able to find a good way to achieve this.
Here is the structure I am looking for:
myProject
│ config.json
│ pom.xml
│
└───src
└───main
├───server
│ server-config.xml
│
└───resources
│ server-artifact.properties
│
└───api
api.json
I managed to generate server-config.xml with some dynamic values using FreeMarker, but I cannot figure out how to process these folders/files recursively. Also, where should I maintain the metadata for this project structure, so that if the structure changes, the program can adapt and generate the basic files/folders accordingly?
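For reference, this is roughly how I generate that single file today (paths, template name, and model values are just examples from my setup):

import java.io.File;
import java.io.FileWriter;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class ServerConfigGenerator {

    public static void main(String[] args) throws Exception {
        // Standard FreeMarker setup; adjust the version to the one you use.
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setDirectoryForTemplateLoading(new File("templates"));
        cfg.setDefaultEncoding("UTF-8");

        // User input that drives the generated content.
        Map<String, Object> model = new HashMap<>();
        model.put("projectName", "myProject");
        model.put("packageName", "com.example.myproject");

        // Render server-config.ftl into the target folder, creating it first.
        File out = new File("myProject/src/main/server/server-config.xml");
        out.getParentFile().mkdirs();
        Template template = cfg.getTemplate("server-config.ftl");
        try (Writer writer = new FileWriter(out)) {
            template.process(model, writer);
        }
    }
}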
Thanks in advance.
I am currently starting with Go and have already dug into the dos and don'ts regarding package naming and workspace folder structure.
Nevertheless, I am not quite sure how to properly organize my code according to the Go paradigm.
Here is my current structure example as it resides in $GOPATH/src:
github.com/myusername/project
├── main.go
└── internal
    └── config
        └── config.go
So I have the project called project, which uses the config package, which in turn is specialized in such a way that it should only be used by project. Hence, I do not want it under github.com/myusername/config, right?
The question now is: is it "good" to use the internal package structure, or should I instead put my project-specific packages under github.com/myusername/$pkgname and somehow indicate that they belong to project (e.g. name it projectconfig)?
If your project produces a single program, then the most common structure is the one you mentioned.
If your project produces more than one program, the common practice is to use a structure like this:
<project>/
cmd/
prog1/
main.go
prog2/
main.go
If your project exposes Go code as a library for third-party consumption, the most common structure is to use the project's root directory to expose the domain model and API:
<project>/
model.go
api.go
This lets third-party code just import "github.com/user/project" and have the model and API available.
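For instance, a consumer might look like this (NewAPI is purely illustrative; the point is that the root package exposes the model and API directly):

package main

import "github.com/user/project"

func main() {
    api := project.NewAPI()
    _ = api // use the api ...
}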
It is common to see the second and third options combined.
It is also considered good practice to have packages that encapsulate the usage of dependencies. For example, suppose your project uses the Elasticsearch client:
<project>/
cmd/
prog1/
main.go
elastic/
impl.go
dao.go
model.go
So in dao.go you define the DAO's API, and in elastic/impl.go you (importing the Elasticsearch library and the domain model) define the implementation of the DAO in terms of Elasticsearch. Finally, you import everything from main.go, which produces the actual program.
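Here is a compressed, single-file sketch of that layering (all names are illustrative; in a real project the pieces would live in dao.go, elastic/impl.go and cmd/prog1/main.go, and the stubbed method would call the Elasticsearch client):

package main

import (
    "errors"
    "fmt"
)

// Domain model and DAO API (would live in the project root, e.g. dao.go).
type User struct {
    ID   string
    Name string
}

type UserDAO interface {
    FindByID(id string) (*User, error)
}

// Elasticsearch-backed implementation (would live in elastic/impl.go).
type elasticUserDAO struct{}

func (d *elasticUserDAO) FindByID(id string) (*User, error) {
    if id == "" {
        return nil, errors.New("empty id")
    }
    // A real implementation would query Elasticsearch and unmarshal the hit.
    return &User{ID: id, Name: "stub"}, nil
}

// Wiring (would live in cmd/prog1/main.go).
func main() {
    var dao UserDAO = &elasticUserDAO{}
    user, err := dao.FindByID("42")
    if err != nil {
        panic(err)
    }
    fmt.Println(user.Name)
}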
See this great and short presentation about this issue.
I am new to Grails and have a question for the Grails experts. I use the asset-pipeline plugin for resource management in my project. Everything is good, but there is an issue: whether or not my resource files (SCSS files, CoffeeScript files, ...) have changed, the resources are compiled every time a view is rendered (in the dev and test environments). This makes the project run slowly. Is there any way to cache resources in the asset pipeline, so that if nothing has changed, the resources are not recompiled? Thanks!
If you are using require directives to build a require tree and then refer to that tree in your views, you can exclude the raw resources from being precompiled by the plugin every time. For example, if you have a require tree under grails-app/assets/javascripts/application.js such as:
//= require jquery
//= require app/models.js
//= require_tree views
//= require_self
or, in a .coffee file:
#= require app/models.js
#= require test
#= require_self
#= require_tree .
And if you don't want models.js to be precompiled every time the view using the require tree is rendered, add the configuration below:
grails.assets.excludes = ["app/models.js"] //app/*js for all resources under app
The above config tells the plugin to skip precompilation of those resources; an asset will only be compiled when it is referenced in a view and has actually changed.
You can find more in the Usage documentation, mainly:
Optionally, assets can be excluded from processing if included by your
require tree. This can dramatically reduce compile time for your
assets.
The above config can be environment-specific and applied only to dev and test. For the production environment and/or the WAR, the precompilation doesn't matter.
environments {
development {
grails.assets.excludes = ["app/models.js"]
}
}
We changed our deploy script to use Liquibase, but now I am starting to have some issues on which I would like another opinion, or maybe a proper solution.
We are using Oracle, and we have a lot of legacy code: packages, functions, procedures, triggers... (as you can see, a lot of logic in the database).
We are using the following structure:
.
..
packages
functions
triggers
baseline
S1301
S1302
S1312
xxx-changelog.xml
The xxx-changelog.xml looks like this:
<include file="baseline/xxx-baseline-changelog.xml" relativeToChangelogFile="true" />
<!-- Sprint change logs -->
<include file="S1304/xxx-s1304-changelog.xml" relativeToChangelogFile="true" />
<include file="S1308/xxx-s1308-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1309/xxx-s1309-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1310/xxx-s1310-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1311/xxx-s1311-changelog.xml" relativeToChangelogFile="true"/>
Because we don't want to copy the file into a new folder every time, we point to the same file, and because we change its content, we have to set the runOnChange property; if we don't, the update will fail.
The thing is that we work in an Agile way and deliver new code every three weeks; sometimes we change a package in one sprint and have to change the same package again in the next one.
My situation is:
1) If we add a new changeSet for each sprint pointing to the file in the packages folder, with runOnChange, every changeSet pointing to this file will execute, because the content is different and runOnChange is set (which is not what I want). But it is the only way to see the changes made in a sprint and keep track of them.
xxx-s1311-changelog.xml
<changeSet id="XXX_SEND_TO_PP_PCK_S1311" author="e-ballo" runOnChange="true">
<sqlFile path="../packages/XXX_SEND_TO_PP_PCK.pkb" splitStatements="false" relativeToChangelogFile="true"/>
</changeSet>
xxx-s1312-changelog.xml
<changeSet id="XXX_SEND_TO_PP_PCK_S1312" author="e-ballo" runOnChange="true">
<sqlFile path="../packages/XXX_SEND_TO_PP_PCK.pkb" splitStatements="false" relativeToChangelogFile="true"/>
</changeSet>
2) If we create a file only for packages (packages-changelog.xml) and add the changeSet for the package with the runOnChange property, it is going to run every time the file changes, but you have no visibility of when we changed it.
Maybe the best solution is to copy the file (the package) into the sprint folder, but I would like to keep the file's history in SVN and also have a clear view of each sprint's new changes in the changelog.
My question:
So, do you guys know if there is some way to disable the checksums (the "hashmap") in Liquibase? Then I would be able to add a changeSet every sprint and keep track of it, and if the id is already in the database it should not execute again. Am I right?
Thanks in advance,
I know this is old; I'm answering for anyone who comes across it in the future.
No, the checksums are pretty ingrained in how Liquibase works; runOnChange is the correct way to do this. The problem is that you should be more granular with your changelogs. Remember: changelogs can include other changelogs.
1) If we add a new changeSet for each sprint pointing to the file in the packages folder, with runOnChange, every changeSet pointing to this file will execute, because the content is different and runOnChange is set (which is not what I want). But it is the only way to see the changes made in a sprint and keep track of them.
Your project structure is good, you just need to take it a step further. Make the actual changelogs that install each package/function/trigger/etc. part of those directories, since they will likely never need to change once written anyway:
.
├── functions
│ ├── my_function_changelog.xml
│ └── sql
│ └── my_function.sql
├── packages
│ ├── my_package_changelog.xml
│ └── sql
│ └── my_package.sql
└── triggers
├── my_trigger_changelog.xml
└── sql
└── my_trigger.sql
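Each of those per-object changelogs holds the single runOnChange changeSet for its object. As a sketch (the id and author are illustrative), my_package_changelog.xml could contain:

<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                       http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <changeSet id="install_my_package" author="your-name" runOnChange="true">
        <sqlFile path="sql/my_package.sql" splitStatements="false" relativeToChangelogFile="true"/>
    </changeSet>

</databaseChangeLog>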
Then, when you need to include one in your release, you include that static changelog instead of defining a new changeSet every time (which, as you have found, confuses Liquibase):
<include file="../packages/my_package_changelog.xml" relativeToChangelogFile="true" />
Now you have traceability of what you did in each sprint, without accidentally reinstalling packages you didn't want to.