Which files should I ignore in Oracle SOA applications? - jdeveloper

Does anyone know which files I can ignore when versioning Oracle SOA applications and Oracle SOA projects?
I usually ignore the .data folder in my Oracle SOA applications, and the deploy, SCA-INF and .design folders in my Oracle SOA projects.
.adf contains some important configuration, and I don't know whether it is recreated. Can I ignore it too?
Is there any other folder to ignore?

When versioning a SOA project, you can ignore the .designer, classes and SCA-INF folders.
You have to keep testsuites and other project-specific files.
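For example, with Subversion you could set the ignore list on the project directory roughly like this (an untested sketch; adjust the folder names to your own layout):

    # one ignore pattern per line, then set svn:ignore from the file
    printf '.designer\nclasses\nSCA-INF\n' > ignore.txt
    svn propset svn:ignore -F ignore.txt .
    svn commit -m "Ignore generated SOA folders"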

Related

Liquibase: tracking changelogs

The project
We are ~50 developers plus DevOps staff and run ~30 Oracle 12c EE instances. We introduced Liquibase in 2018.
We use the Liquibase Maven plugin version 3.8.8. Changelogs are stored in numerous Maven projects, which are committed to Subversion in the usual trunk/tags/branches structure.
The Goal
We want to ease provisioning of new database instances with release versions matching the respective environments. A typical use case is setting up a fresh database in the integration test environment.
One would start with an empty database schema and apply changelogs up to a certain version.
Unfortunately, changelogs that were applied to a schema are often stored in different Maven projects. This makes them hard to locate.
Liquibase does NOT store the actual changeset contents (the concrete DDL) in the DATABASECHANGELOG table; if it did, that would solve the problem.
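For reference, the table only gives you metadata along these lines (standard DATABASECHANGELOG columns) - the executed DDL itself is nowhere in it:

    -- metadata only; the changeset body / concrete DDL is not stored
    SELECT id, author, filename, dateexecuted, md5sum, tag
      FROM databasechangelog
     ORDER BY orderexecuted;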
In search of a solution, I first used Maven to store the changelog's SVN revision in the DATABASECHANGELOG when liquibase:update was executed.
Retrieving changelogs based on revision numbers was error-prone.
I have spent a week now trying to find a robust solution, googled for hours and built several test cases (with adapted parent and concrete POMs, partly using the Maven SCM plugin and such), but without luck. Initially, I planned to use liquibase:tag to store the file path + revision, but this works only if all changesets are in one single changelog file, which is not the case.
Of course, it's desirable to have all changelogs stored in ONE location,
but this is not always possible. For example, scripts that require DBA privileges must be committed to separate Maven projects.
I need a strong reference between each changeset and the corresponding changelog file, or the changelog must be stored directly in the DATABASECHANGELOG.
With our current setup, "Database versioning" with Liquibase is not possible. There is theoretical traceability, but it is up to the users to somehow locate the original changelogs in a huge mess of 100+ separate Maven projects.
Question 1: Is it possible to store the actual changelog content for each changeset into the DATABASECHANGELOG?
Question 2: If not, how can one preserve a reference between a DATABASECHANGELOG entry and the originating changelog file?
(Also, what happens when a changelog file is deleted from Subversion by accident? The DATABASECHANGELOG would just tell me the date and time of the change, some details and a file name - pretty useless, because the actual file would be gone and there would be no way to restore the actual DDL. To prevent such a scenario, I would back up all changelog files. For that, the DATABASECHANGELOG metadata is insufficient, as Liquibase does not track SVN revisions and file paths.)
One option would be to combine the various SVN repositories into a new one using SVN externals and then create a new changelog file.
You can map URLs (SVN tags/branches/revisions) to a folder without copying by using SVN externals: http://svnbook.red-bean.com/en/1.7/svn.advanced.externals.html.
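A rough sketch of such a mapping, using the SVN 1.5+ "URL[@REV] local-dir" externals format (the URLs below are made up):

    # externals.txt - one mapped folder per line, @REV pins a fixed revision
    https://svn.example.com/repos/project-a/trunk/src/main/liquibase@4711 changelogs/project-a
    https://svn.example.com/repos/project-b/tags/2.3.0/src/main/liquibase changelogs/project-b

    # set the property on the aggregating project and pull the folders in
    svn propset svn:externals -F externals.txt .
    svn update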
Hope it helps.

Maven, How to Package ONLY Changed Files?

I have a project of PL/SQL files (stored procedures). I need to hand the DBA an archive containing only the files that have changed, which will be executed for deployment to production. How can I create an archive with Maven that contains only the files from the project that have changed since the last release?
Thanks
For database scripts, look at solutions like Liquibase or solidbase. These frameworks can recognize which scripts have already been executed and where to start from.
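If you go the Liquibase route, a minimal (untested) sketch of a changelog that wraps each PL/SQL script in a tracked changeset could look like this - the file name and changeset id are made up:

    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">

        <!-- each stored-procedure script becomes a changeset Liquibase can track -->
        <changeSet id="billing-pkg-1" author="dba" runOnChange="true">
            <sqlFile path="procs/billing_pkg.sql"
                     relativeToChangelogFile="true"
                     splitStatements="false"
                     endDelimiter="/"/>
        </changeSet>

    </databaseChangeLog>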

WebSphere EAR limit

WebSphere Application Server 8.0.0.0
I am using RAD to make Host Access Transformation Services (HATS) macros and deploy them as web services on WAS. I'm a .NET developer and have little to no experience with WAS and Java EE.
When I was discussing some things with people who have more experience, it was mentioned that we should avoid making multiple EAR files and deploying them onto WAS. It would be preferred to keep the number of EARs as low as possible; ideally only a single EAR would be deployed. Having many WAR modules is fine.
Is there any truth to this, or is it OK if we have multiple EAR files deployed on WAS?
I haven't seen any warning like that, and we run 20-30 EARs on our servers. It's definitely supported and expected that you run multiple Enterprise Applications (EARs).

Rebuild IBM Portal embedded Derby database without a full reinstall

After exhausting all conceivable options over a matter of weeks, and after the drudgery of the back and forth with subpar IBM support, I have come to the conclusion that the only explanation for why my specific development environment fails to run a custom theme where other environments have no problems must have something to do with bad data in configurations contained in the embedded Derby database that comes packaged in the WebSphere Portal profile.
Google gives me no insight into the error I am running into, and I have confirmed the correctness of every single configuration file that has even the slightest chance of impacting the use of the Portal within a simple page. All caching and logs have been disabled and purged, and tracing reveals no additional information that helps diagnose the problem.
Are there any scripts within the Portal installation that can be run to wipe and rebuild the embedded database? If not, is the only option scorched earth (a full reinstall)? The schema and data are cryptic to me, but if it is possible to diagnose the database for specific problems, are there any tools that can do that, or do I need detailed architectural knowledge to have any hope of finding bad data in this software?
I finally discovered what the problem was, and it does indeed have nothing to do with a corrupt database. In actuality it is an inherent conflict between packaged WAR files that contain Subversion metadata and the WAS Portal platform.
When deploying any WAR or EAR file to WAS or any WAS-based product, make sure to exclude all Subversion metadata files and folders (.svn) from the build. They apparently bring WAS and Portal to their knees.
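If the WAR happens to be built with Maven, one way to do that (just a sketch, not RAD-specific) is via the war plugin's source excludes:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
            <!-- keep .svn folders out of the packaged WAR -->
            <warSourceExcludes>**/.svn/**</warSourceExcludes>
        </configuration>
    </plugin>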

What is the recommended way to set up projects like this?

We are working on a large project. The project has multiple external sites and multiple internal sites all stored in Subversion.
The external sites allow a customer to make requests for various things we provide, pay utility bills and more. We decided to break many of these functions apart because most work completely differently from the others. So this is one Visual Studio solution, with the WebUI and the database layer broken into two projects each. For instance, utility billing has a Utility.WebUI project and a Utility.Domain project. All DB/business logic is kept in the domain project.
The internal sites bridge the gap between the back-office system (IBM i) and the web database. They will also replace/enhance some of our older RPG programs. In theory they should use the exact same database logic that the external sites use, because they access the same database, right? What is the best way to reference these projects from a different solution? Should I just add a reference to the DLL, or should I import the project from the external application solution into the internal application solution?
It comes down to this: we have two developers working on this project. I do most of the back-end coding, and the other developer does most of the GUI coding. So we need to make sure that this project works on multiple workstations.
Does this make sense? Any thoughts?
Use the svn:externals property to reference the shared project into your project(s).
You have to choose between 1) referencing the directory containing the shared project's source code (i.e. where the csproj and cs files are located) or 2) referencing the directory containing the shared project's build output (assembly / dll).
I normally prefer method 1) since it makes modifications to the shared project's source code easier (you can make changes without having to open the shared project's solution in a second instance of Visual Studio). If you don't intend to make changes to the shared project often then method 2) might be better. It reduces compile time and prevents accidental modifications of the shared project's source code. Both methods are fine - matter of taste.
For both methods it is recommended that you version your shared project, i.e. create tags with version numbers and reference the tags, not the trunk. When a new version of the shared project comes out, you can update the svn:externals property of your other project(s) with the new version number, run "svn update" to download the new version of the shared project, and recompile. This works especially well if you have a build server for the shared project that does the tagging for you automatically.
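A minimal sketch of that workflow (repository URL and version number are hypothetical; Utility.Domain is the shared project mentioned in the question):

    # pin the shared project to a released tag via svn:externals ("URL local-dir" format)
    svn propset svn:externals "https://svn.example.com/repos/commons/tags/1.2.0/Utility.Domain Shared/Utility.Domain" .
    svn commit -m "Reference Utility.Domain 1.2.0"
    # pull the external into the working copy
    svn update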
I think you can use a sort of "commons" solution that contains the common projects and then refer to these projects in your main solutions using SVN externals pointing to the project folders in the SVN trunk.
The commons SVN repository must follow the suggested repository structure (trunk, branches, tags) so that you always have stable commons projects.
In this scenario you can consider using a dependency management tool, such as NPanday or NDepend, where you declare which version of which assemblies each project depends on; using these tools you can have a local repository (such as Artifactory or Nexus) of binary assemblies to refer to, or choose to use SVN externals to refer directly to the source code.
