I am new to FreeMarker and am writing a new piece of software. Before implementing any functionality, I want a complete folder structure to be created dynamically based on user input (like project name, package name, etc.), but I have not been able to find a good way to achieve this.
Here is the structure I am looking for:
myProject
│ config.json
│ pom.xml
│
└───src
└───main
├───server
│ server-config.xml
│
└───resources
│ server-artifact.properties
│
└───api
api.json
I managed to generate server-config.xml with some dynamic values using FreeMarker, but I cannot work out how to process these folders/files recursively. Also, where should I maintain this project-structure metadata, so that if the structure changes the program can adapt to it and still generate the basic files/folders dynamically?
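For reference, here is a simplified sketch of how I currently render a single file from a template (the template name, data-model keys and paths are made up for this example); the open question is how to drive something like this over the whole structure above:
import java.io.File;
import java.io.FileWriter;
import java.io.Writer;
import java.util.HashMap;
import java.util.Map;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class Generator {
    public static void main(String[] args) throws Exception {
        // FreeMarker setup: templates live in a local "templates" folder (made-up path)
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_23);
        cfg.setDirectoryForTemplateLoading(new File("templates"));
        cfg.setDefaultEncoding("UTF-8");

        // User input that drives the generation
        Map<String, Object> model = new HashMap<String, Object>();
        model.put("projectName", "myProject");
        model.put("packageName", "com.example.myproject");

        // Create the target folder and render one template into it
        File target = new File("myProject/src/main/server/server-config.xml");
        target.getParentFile().mkdirs();

        Template template = cfg.getTemplate("server-config.xml.ftl");
        try (Writer out = new FileWriter(target)) {
            template.process(model, out);
        }
    }
}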
Thanks in advance.
I have a series of Go files that are linked by use but are logically independent. They all use a common set of helper functions defined in a separate file.
My directory structure is shown below.
src/
├── foo1.go
├── foo2.go
├── ...
├── fooN.go
└── helper/
└── helper.go
The foox.go files are all of this form -
package main
import help "./helper"
// functions and structs that use functionality in
// helper but are not related to anything going on
// in other foox.go files
func main() {
// more things that uses functionality in helper
// but are not related to anything going on in
// other foox.go files
return
}
I was running specific files with go run foox.go, but recently updated my version of Go. This workflow is now broken since relative imports are no longer permitted -
"./helper" is relative, but relative import paths are not supported in module mode
What is the correct way to structure a collection of independent Go files that all depend on the same collection of helper functions?
All the guidance says to use modules, but in this case that will mean having a separate module for each foox.go file where each file contains funcs, structs etc. that will never be used in any other module.
All I want to do is be able to run a single .go file that includes another single local .go file, without going through the hassle of making dozens of modules.
You can disable module mode by setting the environment variable GO111MODULE=off.
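For example (a small sketch, assuming a Unix-style shell and a reasonably recent Go toolchain; foo1.go is just one of the files from the question), either per command or persistently:
GO111MODULE=off go run foo1.go

go env -w GO111MODULE=off    # turn module mode off for all subsequent go commands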
I'm using TeamCity to perform my builds.
Within my repository there are multiple projects that use different folders, e.g. like this:
└root
├project1
│ └files
├project2
│ └files
└project3
└files
I have 3 build lanes that should each trigger only on changes in their own folder.
The current trigger configuration for project2 looks like this:
-:*/project1/*
-:*/project3/*
+:*/project2/*
But I don't want to explicitly add all projects to the trigger configuration of every project. Therefore I would like to say something like:
-:IGNORE_EVERYTHING
+:*/project2/*
which means I just want to list the folder that SHOULD get monitored, without explicitly excluding all the others. When I just use the last line of the above, the two other folders get monitored as well.
How do I do that?
According to the documentation on Configuring VCS Triggers:
When entering rules, please note that as soon as you enter any "+" rule, TeamCity will remove the default "include all" setting. To include all the files, use "+:." rule.
You don't need any exclusion rule. Just insert:
+:*/project2/*
in Trigger Rules and you should be good.
I am currently starting with Go and have already dug into the dos and don'ts regarding package naming and workspace folder structure.
Nevertheless, I am not quite sure how to properly organize my code according to the Go paradigm.
Here is my current structure example as it resides in $GOPATH/src:
github.com/myusername/project
|-- main.go
+-- internal
+---- config
|------ config.go
So I have the project called project, which uses the config package, which in turn is specialized in a way that it should only be used by project. Hence, I do not want it under github.com/myusername/config, right?
The question now is, is it "good" to use the internal package structure or should I instead put my project specific packages under github.com/myusername/$pkgname and indicate somehow that it belongs to project (e.g. name it projectconfig)?
If your project produces one single program then the most common structure is the one you mentioned.
If your project produces more than one program, the common practice is to use a structure like this:
<project>/
cmd/
prog1/
main.go
prog2/
main.go
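With that layout, each binary is then built from its own directory under cmd/ (prog1 and prog2 being the example names above), e.g.:
go build ./cmd/prog1
go build ./cmd/prog2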
If your project exposes Go code as a library for third-party consumption, the most common structure is to use the project's root dir to expose the domain model and API.
<project>/
model.go
api.go
This is so that third-party code can just import "github.com/user/project" and have the model and API available.
It is common to see the second and third options combined.
It is also considered good practice to have packages that encapsulate the usage of dependencies. E.g. suppose your project uses the Elasticsearch client:
<project>/
cmd/
prog1/
main.go
elastic/
impl.go
dao.go
model.go
So in dao.go you define the DAO's API, and then in elastic/impl.go you (importing the elastic library and the domain model) define the implementation of the DAO in terms of Elasticsearch. Finally you import everything from main.go, which produces the actual program.
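A minimal, self-contained sketch of that idea (hypothetical names, and everything collapsed into one runnable file; in the real layout the pieces would live in model.go, dao.go and elastic/impl.go, and the implementation would wrap the actual Elasticsearch client):
package main

import "fmt"

// User is part of the domain model (model.go in the layout above).
type User struct {
	ID   string
	Name string
}

// UserDAO is the API that dao.go would expose; the rest of the program
// depends only on this interface.
type UserDAO interface {
	FindByID(id string) (*User, error)
}

// elasticUserDAO stands in for elastic/impl.go, where the real version
// would hold and call the Elasticsearch client.
type elasticUserDAO struct{}

func (d elasticUserDAO) FindByID(id string) (*User, error) {
	// A real implementation would query Elasticsearch here.
	return &User{ID: id, Name: "stub"}, nil
}

func main() {
	// main.go wires the concrete implementation to the DAO interface.
	var dao UserDAO = elasticUserDAO{}
	u, err := dao.FindByID("42")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name)
}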
See this great and short presentation about this issue.
We changed our deploy script to use Liquibase, but now I am starting to have some issues that I would like another opinion on, or maybe to learn a proper solution for.
We are using Oracle, and we have a lot of legacy code: packages, functions, procedures, triggers... (as you can see, a lot of logic in the database).
We are using the following structure:
.
..
packages
functions
triggers
baseline
S1301
S1302
S1312
xxx-changelog.xml
The xxx-changelog.xml looks like this:
<include file="baseline/xxx-baseline-changelog.xml" relativeToChangelogFile="true" />
<!-- Sprint change logs -->
<include file="S1304/xxx-s1304-changelog.xml" relativeToChangelogFile="true" />
<include file="S1308/xxx-s1308-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1309/xxx-s1309-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1310/xxx-s1310-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1311/xxx-s1311-changelog.xml" relativeToChangelogFile="true"/>
Because we don't want to copy the file into a new folder every time, we point to the same file, and because we change its content we have to set the runOnChange property; if we don't, it will fail.
The thing is that we work in an Agile way and deliver new code every 3 weeks; sometimes we change a package in one sprint and have to change the same package again in the next one.
My situation is:
1) If we add a new changeSet for each sprint, pointing to the file in the packages folder, for example with runOnChange, this will execute all the changeSets that point to this file, because the content is different and they are runOnChange (which is not what I want). But it is the only way to know what changed in the sprint and keep track of it.
xxx-s1311-changelog.xml
<changeSet id="XXX_SEND_TO_PP_PCK_S1311" author="e-ballo" runOnChange="true">
<sqlFile path="../packages/XXX_SEND_TO_PP_PCK.pkb" splitStatements="false" relativeToChangelogFile="true"/>
</changeSet>
xxx-s1312-changelog.xml
<changeSet id="XXX_SEND_TO_PP_PCK_S1312" author="e-ballo" runOnChange="true">
<sqlFile path="../packages/XXX_SEND_TO_PP_PCK.pkb" splitStatements="false" relativeToChangelogFile="true"/>
</changeSet>
2) If we create a file only for packages (packages-changelog.xml) and add the changeSet for the package with the runOnChange property, it is going to run every time the file changes, but you have no visibility of when we changed it.
Maybe the best solution is to copy the file (the package) into the folder of the sprint, but I would like to keep the file's history in SVN and also have a clear idea, per sprint, of the new changes in the changelog.
My question:
So, do you guys know if there is some way to disable the checksum comparison in Liquibase? Then I would be able to add the changeSet in every sprint and keep track of it, and if the id is already in the database it should not execute again, am I right?
Thanks in advance,
I know this is old, answering for anyone who comes across it in the future.
No, the checksums are pretty ingrained in how Liquibase works; runOnChange is the correct way to do this. The problem is that you should be more granular with your changelogs. Remember: changelogs can include other changelogs.
if we add a new changeSet for each sprint, pointing to the file in the packages folder, for example with runOnChange, this will execute all the changeSets that point to this file, because the content is different and they are runOnChange (which is not what I want). But it is the only way to know what changed in the sprint and keep track of it.
Your project structure is good, you just need to take it a step further. Make the actual changelogs that install the package/function/trigger/etc. part of those directories, since they will likely never need to be changed once written anyway:
.
├── functions
│ ├── my_function_changelog.xml
│ └── sql
│ └── my_function.sql
├── packages
│ ├── my_package_changelog.xml
│ └── sql
│ └── my_package.sql
└── triggers
├── my_trigger_changelog.xml
└── sql
└── my_trigger.sql
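For illustration, such a static per-package changelog might look roughly like this (the id and paths are hypothetical; runOnChange makes it reinstall the package whenever sql/my_package.sql changes):
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <changeSet id="install_my_package" author="e-ballo" runOnChange="true">
        <sqlFile path="sql/my_package.sql" splitStatements="false"
                 relativeToChangelogFile="true"/>
    </changeSet>

</databaseChangeLog>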
Then, when you need to include one in your release, you include that static changelog instead of defining a new changeSet every time (which, as you have found, confuses Liquibase):
<include file="../packages/my_package_changelog.xml" relativeToChangelogFile="true" />
Now you have traceability of what you did in each sprint, without accidentally re-installing packages you didn't want to.
I have a set of bundles I created with Maven + BND. One of the bundles contains my Vaadin "Application", the others have some utilities and additional editors.
I can run this app on a Tomcat server - everything is OK. Then I tried running it in OSGi (Apache Felix). After many solved problems I finally managed to run the OSGi runtime and have all the bundles loaded and activated correctly.
I can even access the 1st page with "localhost:8080/bat" - it does not show "404 not found" anymore.
The problem is: The start page only shows some unformatted text from my app.
The app can't load its Vaadin resources (CSS, maybe the widgetset, etc.).
The debug frame says:
Starting Vaadin client side engine. Widgetset: com.vaadin.terminal.gwt.DefaultWidgetSet
Widget set is built on version: 6.6.6
Warning: widgetset version 6.6.6 does not seem to match theme version
Starting application bat-97301
Vaadin application servlet version: 6.6.6
Application version: 0.0.1
inserting load indicator
Making UIDL Request with params: init
Server visit took 9ms
...
Assuming CSS loading is not complete, postponing render phase. (.v-loading-indicator height == 0)
Assuming CSS loading is not complete, postponing render phase. (.v-loading-indicator height == 0)
....
CSS files may have not loaded properly.
It looks like the Vaadin resources can't be loaded.
So, the question is:
What is a proper structure for a Vaadin application packaged as an OSGi bundle?
Here's my OSGi bundle structure (created with Maven + BND):
(I skipped some Vaadin Reindeer theme folders as not important)
├───com
│ └───my
│ ├───demomodules
│ ├───preferences
│ ├───widgetset
│ └───workspaces
├───META-INF
├───VAADIN
│ ├───icons
│ ├───themes
│ │ ├───mytheme
│ │ └───reindeer
│ │ ├───a-sprite-definitions
│ └───widgetsets
│ ├───com.my.widgetset.Vaadin1Widgetset
│ │ └───ie6pngfix
│ └───WEB-INF
│ └───deploy
│ └───com.my.widgetset.Vaadin1Widgetset
│ ├───rpcPolicyManifest
│ └───symbolMaps
└───WEB-INF
I just recently did this exercise. Googling for Vaadin and OSGi reveals that there are different takes on how to integrate, and at which level, e.g. component or application. However, the key "realization point" is that you must arrange it so that the VAADIN resources are accessible from the client, i.e. they can be served as resources from your "servlet". I don't think the bundle structure as such will help you here; you must deal with the Http Service and give it instructions on how to serve the stuff.
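To make that concrete, here is a rough, hypothetical sketch (not taken from any of the projects mentioned) of a bundle activator that registers the Vaadin 6 application servlet and the bundle's static /VAADIN resources with the OSGi HttpService; the application class name is a placeholder and the widgetset name is taken from your bundle listing:
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

import com.vaadin.terminal.gwt.server.ApplicationServlet;

public class Activator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        ServiceReference ref = context.getServiceReference(HttpService.class.getName());
        HttpService http = (HttpService) context.getService(ref);

        // Tell the Vaadin 6 servlet which Application class and widgetset to use
        // ("com.my.MyApplication" is a placeholder).
        Hashtable<String, String> params = new Hashtable<String, String>();
        params.put("application", "com.my.MyApplication");
        params.put("widgetset", "com.my.widgetset.Vaadin1Widgetset");

        // Serve the application itself under /bat.
        http.registerServlet("/bat", new ApplicationServlet(), params, null);

        // Serve the static VAADIN folder (themes, widgetsets) from this bundle,
        // so the client can load the CSS and the widgetset.
        http.registerResources("/VAADIN", "/VAADIN", null);
    }

    public void stop(BundleContext context) throws Exception {
        // The Http Service unregisters this bundle's aliases when it stops.
    }
}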
Take a look at the Vaadin examples by Neil Bartlett at https://github.com/njbartlett/VaadinOSGi, specifically the vaadinbridge project. That helped me in understanding the issues.
Another approach might be to deploy the bundle on an OSGi container that understands WARs, such as Virgo. But that is just a guess.