Good practices - Use of Liquibase packages - Oracle

We changed our deploy script to use Liquibase, but now I am starting to have some issues that I would like a second opinion on, or maybe a proper solution for.
We are using Oracle, and we have a lot of legacy code: packages, functions, procedures, triggers... (as you can see, a lot of logic in the database).
We are using the following structure:
.
..
packages
functions
triggers
baseline
S1301
S1302
S1312
xxx-changelog.xml
The xxx-changelog.xml looks like this:
<include file="baseline/xxx-baseline-changelog.xml" relativeToChangelogFile="true" />
<!-- Sprint change logs -->
<include file="S1304/xxx-s1304-changelog.xml" relativeToChangelogFile="true" />
<include file="S1308/xxx-s1308-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1309/xxx-s1309-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1310/xxx-s1310-changelog.xml" relativeToChangelogFile="true"/>
<include file="S1311/xxx-s1311-changelog.xml" relativeToChangelogFile="true"/>
Because we don't want to copy the file into a new folder every time, we point to the same file, and because we keep changing its content we have to set the runOnChange property, because otherwise the update will fail.
The thing is that we are working in an Agile way and deliver new code every 3 weeks; sometimes we change a package in one sprint and then have to change the same package again in the next one.
My situation is:
1) If we add a new changeSet for each sprint, pointing to the file in the packages folder, with runOnChange, this will execute every changeSet that points to this file, because the content is different and they are all runOnChange (which is not what I want). But it is the only way to know what changed in the sprint and keep track of it.
xxx-s1311-changelog.xml
<changeSet id="XXX_SEND_TO_PP_PCK_S1311" author="e-ballo" runOnChange="true">
<sqlFile path="../packages/XXX_SEND_TO_PP_PCK.pkb" splitStatements="false" relativeToChangelogFile="true"/>
</changeSet>
xxx-s1312-changelog.xml
<changeSet id="XXX_SEND_TO_PP_PCK_S1312" author="e-ballo" runOnChange="true">
<sqlFile path="../packages/XXX_SEND_TO_PP_PCK.pkb" splitStatements="false" relativeToChangelogFile="true"/>
</changeSet>
2) If we create a file only for packages (packages-changelog.xml) and add the changeSet for the package with the runOnChange property, it is going to run every time the file changes, but you don't have the visibility to know when we changed it.
Maybe the best solution is to copy the file (the package) into the sprint folder, but I would like to keep the history of the file in SVN and also have a clear view of the new changes per sprint in the changelog.
My question:
So do you guys know if there is some way to disable the checksums in Liquibase? Then I would be able to add the changeSet in every sprint and keep track of it, and if the id is already in the database it should not execute again, am I right?
Thanks in advance,

I know this is old, answering for anyone who comes across it in the future.
No, the checksums are pretty ingrained in how Liquibase works; runOnChange is the correct way to do this. The problem is that you should be more granular with your changelogs. Remember: changelogs can include other changelogs.
If we add a new changeSet for each sprint, pointing to the file in the
packages folder, with runOnChange, this will execute every changeSet
that points to this file, because the content is different and they are
all runOnChange (which is not what I want). But it is the only way to
know what changed in the sprint and keep track of it.
Your project structure is good, you just need to take it a step further. Make the actual changelogs that install the package/function/trigger/etc. part of those directories, since they will likely never need to change once written anyway:
.
├── functions
│   ├── my_function_changelog.xml
│   └── sql
│       └── my_function.sql
├── packages
│   ├── my_package_changelog.xml
│   └── sql
│       └── my_package.sql
└── triggers
    ├── my_trigger_changelog.xml
    └── sql
        └── my_trigger.sql
Then when you need to include one in your release you include that static changelog instead of defining a new changeset every time (which, as you have found, confuses Liquibase):
<include file="../packages/my_package_changelog.xml" relativeToChangelogFile="true" />
Now you have traceability of what you did in each sprint without accidentally re-installing packages you didn't want to.

Related

Run a single Go file with a single local import without using modules

I have a series of go files that are linked by use but are logically independent. They all use a common set of helper functions defined in a separate file.
My directory structure is shown below.
src/
├── foo1.go
├── foo2.go
├── ...
├── fooN.go
└── helper/
    └── helper.go
The foox.go files are all of this form -
package main

import help "./helper"

// functions and structs that use functionality in
// helper but are not related to anything going on
// in other foox.go files

func main() {
    // more things that use functionality in helper
    // but are not related to anything going on in
    // other foox.go files
    return
}
I was running specific files with go run foox.go, but recently updated my version of Go. This workflow is now broken since relative imports are no longer permitted -
"./helper" is relative, but relative import paths are not supported in module mode
What is the correct way to structure a collection of independent Go files that all depend on the same collection of helper functions?
All the guidance says to use modules, but in this case that will mean having a separate module for each foox.go file where each file contains funcs, structs etc. that will never be used in any other module.
All I want to do is be able to run a single .go file that includes another single local .go file, without going through the hassle of making dozens of modules.
You can disable module mode by setting the environment variable GO111MODULE=off
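For example (a rough sketch; the inline environment-variable form assumes a Unix-like shell, and go env -w requires Go 1.13 or newer):
GO111MODULE=off go run foo1.go
# or set it once for your user and keep the old workflow unchanged:
go env -w GO111MODULE=off
go run foo1.go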

Generate Project Structure Dynamically Freemarker/FMPP

I am new to Freemarker and am writing a new piece of software. Before starting on any functionality, I want a complete folder structure to be created dynamically on the basis of user input (like project name, package name, etc.), but I am not able to find a good way to achieve this.
Here is the structure which I am looking for.
myProject
│   config.json
│   pom.xml
│
└───src
    └───main
        ├───server
        │       server-config.xml
        │
        └───resources
            │   server-artifact.properties
            │
            └───api
                    api.json
I managed to generate server-config.xml with some dynamic values using Freemarker, but I am not able to understand how I can process these folders/files recursively. Also, where should I maintain this project-structure metadata, so that if there is any change in the structure the program can adapt to it and generate those basic files/folders dynamically?
Thanks in advance.

Teamcity build trigger exclude all but some

I'm using TeamCity to perform my builds.
Within my repository there are multiple projects that use different folders, e.g. like this:
└root
  ├project1
  │  └files
  ├project2
  │  └files
  └project3
     └files
I have 3 build lanes that should each trigger only on their own folder.
The current trigger configuration for project2 looks like this:
-:*/project1/*
-:*/project3/*
+:*/project2/*
But I don't want to explicitly add all the other projects to the trigger configuration of every project. Therefore I would like to say something like
-:IGNORE_EVERYTHING
+:*/project2/*
which means I just want to list the folder that SHOULD get monitored, rather than excluding all the others. When I just use the last line of the above, the two other folders get monitored as well.
How do I do that?
According to the documentation on Configuring VCS Triggers:
When entering rules, please note that as soon as you enter any "+" rule, TeamCity will remove the default "include all" setting. To include all the files, use "+:." rule.
You don't need any exclusion rule. Just insert:
+:*/project2/*
in Trigger Rules and you should be good.

Multiple VCS Trigger with different "Per-checkin Triggering" for different branches

I need two VCS Triggers with different Per-Checkin Triggering rules based on a branch filter.
The reason: for the "release-*" and "master" branches, when I merge everything in I don't want a build created per check-in; however, I do for any of the other branches. I thought I could do this by adding a second trigger filtering the branches, so they looked something like this:
The first VCS Trigger, this will build all of these branches with "Trigger a build on each check-in" checked
-:*
+:refs/heads/hotfix/hotfix-*
+:refs/heads/develop
+:refs/heads/feature/feature-*
The second VCS Trigger, this will build all of these branches with "Trigger a build on each check-in" unchecked
-:*
+:refs/heads/release/release-*
+:refs/heads/master
(Please excuse my not so epic paint skills)
Is there another way I can do this?
Thanks
Steve
I couldn't find how to add 2 VCS triggers on a single build configuration, have you tried that?
I'm on TC 10 though, but if that really doesn't work then the only way I can think of is just to create 2 separate builds. :|
The solution was to modify the build configuration XML. Steps were:
Locate your TeamCity project folder, which is a subdirectory of the TeamCity Data Directory; mine was C:\ApplicationData\TeamCity\config\projects.
Find the build configuration file under whichever project subfolder it lives in, for example: C:\ApplicationData\TeamCity\config\projects\parentProj_Proj\buildTypes\build_config_name.xml
At the bottom of this file is where I found the build triggers section. Find the current build trigger you have in there and duplicate it, but remember to change the "id" attribute on the "build-trigger" element. So my final config looks like this:
<build-triggers>
  <build-trigger id="vcsTrigger" type="vcsTrigger">
    <parameters>
      <param name="branchFilter"><![CDATA[-:*
+:refs/heads/hotfix/hotfix-*
+:refs/heads/develop
+:refs/heads/feature/feature-*]]></param>
      <param name="groupCheckinsByCommitter" value="true" />
      <param name="perCheckinTriggering" value="true" />
      <param name="quietPeriodMode" value="DO_NOT_USE" />
    </parameters>
  </build-trigger>
  <build-trigger id="vcsTrigger1" type="vcsTrigger">
    <parameters>
      <param name="branchFilter"><![CDATA[-:*
+:refs/heads/release/release-*
+:refs/heads/master]]></param>
      <param name="quietPeriodMode" value="DO_NOT_USE" />
    </parameters>
  </build-trigger>
</build-triggers>
This, although probably unsupported, seems to work just fine.

Proper folder structure for go packages used by a single go project

I am currently starting with Go and have already dug into the dos and don'ts regarding package naming and workspace folder structure.
Nevertheless, I am not quite sure how to properly organize my code according to the Go paradigm.
Here is my current structure example as it resides in $GOPATH/src:
github.com/myusername/project
|-- main.go
+-- internal
+---- config
|------ config.go
So I have the project called project, which uses the config package, which in turn is specialized in such a way that it should only be used by project. Hence, I do not want it under github.com/myusername/config, right?
The question now is, is it "good" to use the internal package structure or should I instead put my project specific packages under github.com/myusername/$pkgname and indicate somehow that it belongs to project (e.g. name it projectconfig)?
If your project produces one single program then the most common structure is the one you mentioned.
If your project produces more than one program the common practice is using a structure like this:
<project>/
    cmd/
        prog1/
            main.go
        prog2/
            main.go
If your project exposes Go code as a library for third-party consumption, the most common structure is using the project's root dir to expose the domain model and API.
<project>/
    model.go
    api.go
This is so third-party code can just import "github.com/user/project" and have the model and API available.
It is common to see the second and third options combined.
It is also considered good practice to have packages that encapsulate dependency usage. E.g., suppose your project uses the Elasticsearch client:
<project>/
    cmd/
        prog1/
            main.go
    elastic/
        impl.go
    dao.go
    model.go
So in dao.go you define the DAO's API, and then in elastic/impl.go you (importing the elastic library and the domain model) define the implementation of the DAO in terms of Elasticsearch. Finally, you import everything from main.go, which produces the actual program.
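To make that wiring concrete, here is a rough sketch of how the files might look (all type and function names are hypothetical, and the actual Elasticsearch calls are elided so the example stays self-contained):
// model.go - the domain model, importable by anyone.
package project

type User struct {
    ID   string
    Name string
}

// dao.go - also package project; the DAO API, defined purely in terms of the domain model.
type UserDAO interface {
    FindByID(id string) (*User, error)
}

// elastic/impl.go - implements UserDAO using the Elasticsearch client.
package elastic

import "github.com/myusername/project"

type UserDAO struct {
    // would hold the Elasticsearch client
}

func (d *UserDAO) FindByID(id string) (*project.User, error) {
    // query Elasticsearch here and map the hit onto the domain model
    return &project.User{ID: id}, nil
}

// cmd/prog1/main.go - wires everything together into the actual program.
package main

import (
    "fmt"

    "github.com/myusername/project"
    "github.com/myusername/project/elastic"
)

func main() {
    var dao project.UserDAO = &elastic.UserDAO{}
    u, _ := dao.FindByID("42")
    fmt.Println(u.Name)
}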
See this great and short presentation about this issue.
