After exhausting every conceivable option over a matter of weeks, and after the drudgery of the back and forth with subpar IBM support, I have come to the conclusion that the only explanation for why my specific development environment fails to run a custom theme, while other environments have no problems, is bad data in the configurations stored in the embedded Derby database that comes packaged with the WebSphere Portal profile.
Google gives me no insight into the error I am running into, and I have confirmed the correctness of every single configuration file that has even the slightest chance of affecting the use of the Portal within a simple page. Every type of caching and logging has been disabled and purged, and tracing reveals no additional information that helps diagnose the problem.
Are there any scripts within the Portal installation that can be run to wipe and rebuild the embedded database? If not, is the only option scorched earth? The schema and data are cryptic to me, but if it is possible to check the database for specific problems, are there any tools that can do that, or do I need detailed architectural knowledge to have any hope of finding bad data in this software?
I finally discovered what the problem is, and it does indeed have nothing to do with a corrupt database. It is actually an inherent conflict between packaged WAR files containing Subversion metadata and the WAS Portal platform.
When deploying any WAR or EAR file to WAS or any WAS-based product, make sure to exclude all Subversion metadata files and folders from the build. That metadata apparently brings WAS and Portal to their knees.
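For illustration, here is a minimal sketch of one way to do that exclusion, written in Python since build setups vary; the directory and file names are hypothetical, and in an Ant or Maven build the equivalent is simply an exclude pattern such as **/.svn/** on the WAR packaging step.

    # Sketch: package an exploded web app directory into a WAR while skipping
    # Subversion metadata (.svn directories). The paths below are hypothetical.
    import os
    import zipfile

    def build_war(source_dir, war_path):
        """Zip source_dir into war_path, excluding any .svn folders."""
        with zipfile.ZipFile(war_path, "w", zipfile.ZIP_DEFLATED) as war:
            for root, dirs, files in os.walk(source_dir):
                # Prune .svn directories so os.walk never descends into them.
                dirs[:] = [d for d in dirs if d != ".svn"]
                for name in files:
                    full_path = os.path.join(root, name)
                    war.write(full_path, os.path.relpath(full_path, source_dir))

    if __name__ == "__main__":
        build_war("MyTheme/WebContent", "MyTheme.war")  # hypothetical paths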
Related
I have problems finding useful Pentaho ETL tool information. Is this tool dying?
What are the alternative tools/platforms?
In short, yes. Hitachi Vantara acquired the suite and they aren't giving it much love. They've just released version 9.2, but I don't have much faith in seeing a lot of improvements for the CE version; maybe some more for the paid version, but I don't think there's going to be much there either. They killed the old forum, and the new community they created is deserted.
For the Data Integration (ETL) part of the suite, you can go to Apache Hop (https://hop.apache.org). A bunch of Kettle developers (Kettle being the original name of the Pentaho Data Integration tool in the Pentaho Suite) are actively modernizing and much improving the old code: it mostly works with Java 11, although they are staying on Java 8 because of the dependencies of the Beam plugins, and the dependencies on old deprecated libraries are gone. For now there isn't a lot of new functionality, mostly the migration plus some new features, but even though there isn't a 1.0 version yet, it is well advanced and some PDI/Kettle users are beginning to transition their production environments to this new tool.
There's a migration utility for converting the old jobs and transformations in PDI to workflows and pipelines in Hop. After applying the migration tool you're still going to need to check and modify things: DB connections will possibly need some work afterwards, and a few of the old steps aren't available in Hop (the Formula step is the one that affects me most), but in general the utility saves a lot of work.
New things in Hop:
Built around project and environment configuration, so paths and other project- or machine-dependent information work across different configurations: you just change the project or environment settings and everything works.
Much better metadata injection support; PDI/Kettle still had a lot of steps with properties not available for metadata injection, and these have been added during the migration.
Night mode
Much lighter and quicker to start (PDI takes a long time just to initialize), and you can get rid of steps/plugins you don't need if you just want a thin client to perform one task.
Hop-web
Is there any way to look for deployment events from Visual Studio in Azure? I would like to see when we deployed.
Previously we had that option in Application Insights; now that they have removed that feature, I don't know where to look.
I would've expected to find that information, i.e., the deployment/publish history, in the Azure Activity Logs or under Deployments in the corresponding Resource group, but couldn't, so I began digging further.
AFAIK, there isn't a straightforward way of tracking manual deployments on Azure. However, I can think of two workarounds:
Using VS Code: When a deployment is performed manually from VS Code with the Azure App Service extension, it becomes easier to track the deployment history and to see the current/active version, along with various other configuration options and logs. For example:
Using the Kudu console: Browse to the deployments page of the Kudu console for your app service (it will be of the form https://{app-name}.scm.azurewebsites.net/api/deployments), and you should see the history in JSON format. For example:
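If you would rather pull that JSON programmatically instead of opening it in a browser, a small sketch like the one below should work; the app name and the deployment (publish-profile) credentials are placeholders, and the exact fields in each entry may vary with the Kudu version.

    # Sketch: fetch the Kudu deployment history for an App Service.
    # The app name and deployment credentials below are placeholders.
    import requests

    APP_NAME = "my-app"                      # hypothetical app name
    USER = "$my-app"                         # deployment user from the publish profile
    PASSWORD = "publish-profile-password"    # deployment password from the publish profile

    url = f"https://{APP_NAME}.scm.azurewebsites.net/api/deployments"
    response = requests.get(url, auth=(USER, PASSWORD), timeout=30)
    response.raise_for_status()

    for deployment in response.json():
        # Each entry typically carries an id, a status and timestamps.
        print(deployment.get("id"), deployment.get("status"), deployment.get("received_time"))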
Note: I'd strongly advise always using the same mode of deployment, for consistency, and also building a CI/CD pipeline to streamline your deployments if this is not a test environment or a PoC.
Hope this helps!
I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS and Team Build should be used in a Development > Test > Production environment? Currently I have my local VS get the latest files, then I work on them and check them in. This triggers a build that pushes the published files into a location on the test server which IIS references, and that becomes my test environment. What, then, is the best practice for deploying this to a live environment once testing is complete?
Secondly, following on from the previous point: my web application is connected to a database, so the test version will point to a test database. When it has been tested and is put live, I will need that process to also make sure that any data connections are switched to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's the tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments; including testing and deploying software releases.
Specifically, the tool that Microsoft offers would enable you to automate the release process, from development to production, keeping track of what and how everything is done when a particular stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview:
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components), it is also certainly possible to change configuration files, for example to test different scenarios, and so on.
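To make the configuration-file point concrete, here is a generic sketch (not Release Management's own mechanism) of swapping environment-specific values, such as the database connection string from your second question, into a config file during a release stage; the file names, keys and connection strings are hypothetical.

    # Sketch: replace __TOKEN__ placeholders in a config template with the
    # values for the stage being deployed. Names and values are hypothetical;
    # Release Management tokenization or web.config transforms do the same job.
    SETTINGS_BY_STAGE = {
        "Test": {"ConnectionString": "Server=test-sql;Database=AppDb;Integrated Security=True"},
        "Production": {"ConnectionString": "Server=prod-sql;Database=AppDb;Integrated Security=True"},
    }

    def apply_stage_settings(template_path, output_path, stage):
        # Read the template, substitute each __KEY__ token, write the final config.
        with open(template_path, encoding="utf-8") as f:
            content = f.read()
        for key, value in SETTINGS_BY_STAGE[stage].items():
            content = content.replace(f"__{key}__", value)
        with open(output_path, "w", encoding="utf-8") as f:
            f.write(content)

    if __name__ == "__main__":
        apply_stage_settings("Web.template.config", "Web.config", "Production")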
Another image to give you an idea of how it can be done:
My current task in the company is to implement Ivy dependency management.
Now I've hit the following libraries, which I couldn't find in the usual Maven repositories, such as http://mvnrepository.com:
com.ibm.mq.jar
com.ibm.mq.pcf.jar
...
and so forth (they all share the com.ibm.mq prefix).
I could find them on a separate website: http://www.java2s.com/
But it's not Maven compatible.
So, where can I find those? What's the best way to handle this?
I'm thinking of uploading them manually to the team Nexus. But is this the usual procedure in such cases?
Thanks a lot in advance.
Best place to find those is directly from IBM. Please see this answer for the different versions of the WMQ Java/JMS client available.
As for the best way to package these, please be aware that if you want IBM to support them you need to install the client code rather than just bundling in the jar files. The reason IBM is reluctant to support non-standard installs should concern you as well if the app is to be installed in Production. The full client includes considerable additional functionality such as diagnostics, trace functions, crypto libs, JSSE, etc. In addition, it is the only install against which you can apply IBM's maintenance.
If you install the jars from a 3rd party site such as the one linked above, do you even know what version they are? Has any of the maintenance been applied? Have the latest patches been applied? Since IBM only distributes the full client, and OEMs are not authorized to distribute the jar files except as part of their application, any site offering the WMQ jar files is by definition pirating them.
I realize that requiring you to do the full client install is considered burdensome when you are used to being able to just grab some jars and go. On the other hand, if you don't need support then you might install the WMQ Client on a VM somewhere, keep it up to date and grab the jar files from there. That way you have a known-good set of files that are all in sync and to which you can apply maintenance.
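If you do go the route of taking the jars from your own client install and publishing them to the team Nexus so Ivy can resolve them, a rough sketch of the upload is below; the Nexus URL, repository name, credentials and version number are all placeholders, and the exact path layout depends on your Nexus version and repository setup.

    # Sketch: publish a locally obtained WMQ jar to an internal hosted Maven
    # repository so Ivy builds can resolve it. Every URL, credential and
    # version below is a placeholder.
    import requests

    NEXUS_BASE = "https://nexus.example.com/repository/thirdparty"  # hypothetical
    GROUP_PATH = "com/ibm/mq"
    ARTIFACT = "com.ibm.mq"
    VERSION = "7.5.0.0"  # use the version reported by your installed client

    url = f"{NEXUS_BASE}/{GROUP_PATH}/{ARTIFACT}/{VERSION}/{ARTIFACT}-{VERSION}.jar"
    with open("com.ibm.mq.jar", "rb") as jar:
        response = requests.put(url, data=jar, auth=("deployer", "secret"), timeout=60)
    response.raise_for_status()
    print("Uploaded:", url)

Your ivy.xml would then reference it like any other module, with org and name set to com.ibm.mq and rev set to whatever version you uploaded.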
If you'd like to suggest to IBM that they need a lighter-weight Java solution, feel free to raise the requirement (or vote on it if it already exists) at the IBM Request For Enhancement (RFE) Community.
This question came up on the development team I'm working with and we couldn't really get to a consensus:
Should changes to the database be part of the CI script?
Assuming that the application you are working with has a database involved, I think yes, because that's the definition of integration. If you aren't including a portion of your application, then you aren't really testing your integration. The counter-argument is that the CI server is the place to make sure your basic project setup works -- essentially building a virgin checkout of the latest version of your code.
Is there a "best practices" document for CI that would answer this question? Is this something that is debated among those who are passionate about CI?
Martin Fowler's opinion on it:
A common mistake is not to include everything in the automated build. The build should include getting the database schema out of the repository and firing it up in the execution environment.
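As an illustration of what "firing it up" can look like in a build, here is a minimal sketch that creates a throwaway database from the schema scripts kept in the repository; SQLite is used only so the example is self-contained, the directory layout is hypothetical, and a real build would target the project's actual DBMS.

    # Sketch: build step that provisions a fresh database from the schema
    # scripts stored in source control. Paths are hypothetical.
    import glob
    import os
    import sqlite3

    def build_fresh_database(db_path="build/test.db", schema_glob="db/schema/*.sql"):
        os.makedirs(os.path.dirname(db_path), exist_ok=True)
        conn = sqlite3.connect(db_path)
        try:
            for script in sorted(glob.glob(schema_glob)):  # apply scripts in order
                with open(script, encoding="utf-8") as f:
                    conn.executescript(f.read())
            conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        build_fresh_database()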
All code, including the DB schema and prepopulated table values, should be subject to both source control and continuous integration. I have seen far too many projects where source control is used - but not for the DB. Instead there is a master database instance where everyone is making their changes at the same time. This makes it impossible to do branching, and it also makes it impossible to recreate an earlier state of the system.
I'm very fond of using Visual Studio 2010 Premium's functionality for database schema handling. It makes the database schema part of the project structure, with the master schema under source control. A fresh database can be created right out of the project. Upgrade scripts that bring existing databases up to the new schema are generated automatically.
Doing change management properly for databases without VS2010 Premium or a similar tool would at best be painful - if possible at all. If you don't have that tool support, I can understand your colleague wanting to keep the DB out of CI. If you have trouble arguing for including the DB in CI, then maybe the first step is to get a decent toolset for DB work? Once you have the right tools, including the DB in CI is a natural step.
You have no continuous integration if you have no real integration. This means that all components needed to run your software must be part of CI; otherwise you have something just a bit more sophisticated than source control, but no real CI benefits.
Without the database in CI, you can't roll back to a specific version of the application, and you can't run your tests in a real, always-complete environment.
It is of course not an easy subject. On the project I work on, we use alter scripts that need to be checked in together with the source code changes. These scripts are run against our test database to ensure not only the correctness of the current build, but also that upgrading or downgrading by one version is possible and that the update process itself doesn't mess anything up. I believe this is a better solution than dropping and recreating the whole database: it gives you a consistent path for upgrading the database step by step, and it allows you to use the database in some kind of test environment - with data, users, etc.
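To show the shape of that approach, here is a minimal sketch of a runner that applies checked-in alter scripts in order and records which ones have already been run; SQLite is used only to keep the example self-contained, and the directory and table names are hypothetical.

    # Sketch: apply alter scripts in order, tracking applied scripts in a
    # schema_version table so nothing is ever run twice. Names are hypothetical.
    import glob
    import os
    import sqlite3

    def migrate(db_path="test.db", scripts_glob="db/alters/*.sql"):
        conn = sqlite3.connect(db_path)
        try:
            conn.execute("CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)")
            applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
            for path in sorted(glob.glob(scripts_glob)):  # e.g. 001_add_column.sql, 002_...
                name = os.path.basename(path)
                if name in applied:
                    continue  # already applied on this database
                with open(path, encoding="utf-8") as f:
                    conn.executescript(f.read())
                conn.execute("INSERT INTO schema_version (script) VALUES (?)", (name,))
                conn.commit()
        finally:
            conn.close()

    if __name__ == "__main__":
        migrate()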