Our Jenkins job downloads some code from a Perforce server, using a pre-defined workspace. It
sometimes fails with the following error message:
Client 'xxxx' can only be used from host 'yyyy'.
When I look at the workspace ("client" is an obsolete name for workspace), I see that its settings don't mention host yyyy at all.
I suspect that people (or unknown scripts) change the workspace's settings, do some work and then change them back. If a Jenkins job is scheduled to run during that time, it fails.
How can I determine if I guessed correctly? Are there any logs on the Perforce server which report workspace changes? Maybe some server setting to record all changes to workspaces?
Workspace settings look like something I should be able to track and/or revert using version control; is this really the case?
First and foremost, you should set the locked option on the client if you don't want anyone else messing with it (and set its Owner to be the user who runs the Jenkins job, and ensure that this user is password-protected so that nobody else can impersonate Jenkins).
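As a sketch, the relevant fields of the client spec would end up looking something like this (run p4 client xxxx as the Jenkins user; "jenkins" below is just a placeholder for whatever user your job runs as):

Owner:   jenkins
Options: noallwrite noclobber nocompress locked nomodtime normdir

Once the locked option is set, only the Owner (or an admin using the -f flag) can use or edit the client.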
To track changes to client specs, you can set up a spec depot (just create a depot with Type: spec). This will cause every spec update to be saved in that depot as a revision of a text file, e.g. client xxxx will correspond to a text file called //spec/client/xxxx. You can run normal commands like p4 annotate on that file to see its change history, and you can pipe old versions of the file into the current client spec by doing, e.g.:
p4 print -q //spec/client/xxxx | p4 client -i
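If you don't already have a spec depot, creating one is a one-time setup; a rough sketch (the depot name "spec" is the conventional choice but not required, and on older servers you may need to run p4 depot spec and set Type: spec in the form instead of using -t):

p4 depot -t spec spec
p4 admin updatespecdepot -a
p4 filelog //spec/client/xxxx

updatespecdepot pre-populates the depot with the specs that already exist; after that, every edit to a client spec is versioned automatically, and filelog/annotate show who changed what and when.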
But again, first and foremost, persistent clients that automation depends on should simply be locked so that they can't be sabotaged (intentionally or unwittingly) by other users.
Recently I've run into an issue configuring Last Participant Support on a deployed application. I found an old post about it:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found how to do it, but with Jython or wsadmin commands I'm not able to find how to do it on the application itself.
But that does not help in my case. Any ideas?
There is no command assistance available for the action of changing last participant support from the admin console, which typically implies there is no scripting command associated with the action. And there doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repo changes made as a result of the admin console action, the IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi" for an application is created/modified by the action.
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes wouldn't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the knowledge that redeploying the app will overwrite your changes. The changes can be made by doing something along the lines of:
1) create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) in wsadmin or your Jython script do the following (note: in this example the xmi file you created is in the current directory; if not, include the full path to it in the createDocument command):
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) restart the server
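For reference, if you put the step-2 commands in a Jython file, you would run it with something like the following (the script name is just a placeholder):

wsadmin.sh -lang jython -f add_pme_ext.py

(wsadmin.bat on Windows; run from the profile's bin directory.)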
I have some Windows Server 2016 instances on GCE (for Jenkins agents).
I'm wondering what the best practice is when it comes to the computer name.
Currently, when I want to create a new node, I clone an instance (create images from disks + create template + create instance from template).
On this clone, I change the computer name (in Windows) so that it has the same name as on GCE. Is that useful? Recommended? Bad? Needed?
I know that the name of the Jenkins node needs to be the same as the name of the GCE instance (to be picked up easily). However, I don't think the Windows computer name matters.
So, should I pick an identical generic name for all of them? A prefix+random generated name? Continue with the instance=computer=node name?
The node name that I use in Jenkins is always retrieved from env.NODE_NAME (when needed), so that should not break any pipeline. Not sure though, as I may be missing something (internal to Jenkins).
Bonus question: After cloning, I have to do some modifications on the clone for Perforce (p4) to work.
I temporarily set some env variables
I duplicate the workspace: p4 client -t prefix-buildX-suffix prefix-buildY-suffix
I set up the stream (not sure if doable in one step)
Then regenerate the list of files: p4 sync -k <root_folder_to_be_generated>/...@YYYY/MM/DD
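Concretely, the sequence looks roughly like this (server address, stream path and date are placeholders, and the stream-switch step is the part I'm least sure about):

set P4PORT=my-perforce-server:1666
set P4CLIENT=prefix-buildY-suffix
p4 client -t prefix-buildX-suffix prefix-buildY-suffix
p4 client -s -S //depot/main prefix-buildY-suffix
p4 sync -k <root_folder_to_be_generated>/...@YYYY/MM/DD

The first client command copies the spec from the template workspace, the -s -S call switches the new workspace to the stream, and sync -k only updates Perforce's record of what the workspace already has, without transferring any files.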
So, here also there's a name prefix-buildY-suffix which is the same as the one from the instance=computer=node (buildY). It may be a separate question, but as it's still in the same context, I'm putting it here: should I create a new workspace every time? Knowing that it's used on several machines, I'd say yes; otherwise, I imagine p4 would have contradictory information about the state of this workspace. So, here also, I currently need to customize the name. Even if I make the Windows computer name generic, I would still need to customize the p4 workspace name, wouldn't I?
Jenkins must have the same computer name as the one on the network.
So, all three names must be identical.
I have installed OEM 13c and deployed a couple of agents and want to test out the Log File Monitoring utility. I have enabled it and added a log file to monitor.
When I go and test it, it does not show any alerts when matching entries are written to the log file. On the agent server, I have tailed the file and can see the messages coming into the log file.
Does anyone have experience adding log files to OEM? I could have configured it wrong. Or are there any troubleshooting steps I can follow to see if the server is even contacting the agent to read the log file? The status of the agent is good, with no incidents.
Without access to the system, it would be difficult to tell you the exact cause of this issue. However, I can list a few potential causes of this issue that I have experienced personally:
Permissions. The Oracle Enterprise Manager Agent is very convoluted when it comes to system permissions within a remote server. The agent can be owned and run as any number of users but during metric evaluation, may also need sudo or pam-authentication permissions to access certain entities on the server. Depending on the authentication profiles on that server, this could be the cause of your issue. There are ways to grant the agent access through the PAM stack if that is necessary.
Syntax. The wildcard syntax in the OEM GUI can be a little confusing as well. I would play with the wildcard elements a bit on the "String" component to ensure that it isn't as simple as adding wildcards to the beginning and end of the string. Without diving into the binaries of the agent plugins, it is difficult to assess exactly how the agent is evaluating this particular metric.
One suggestion I would have is to go through the agent commands. There are specific commands you can run to manually force an agent to evaluate a particular metric for a particular target. This can allow you to manually trigger the metric collection locally on the server and evaluate what exactly is being performed at the agent level.
On the system I was running (12c) the command was as follows:
emctl control agent runCollection <hostname>:host host_storage
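On 13c you can do the same kind of check from the agent host; a few related commands that help narrow this down (the target and collection names are placeholders that depend on how your log file metric is configured):

emctl status agent
emctl upload agent
emctl control agent runCollection <target_name>:<target_type> <collection_name>

The first confirms the agent is up and uploading, the second forces an upload of already-collected metric data to the OMS, and the third triggers the named collection locally so you can watch the agent logs (e.g. gcagent.log) while it runs.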
I built an Analysis that displayed Results, error free. All is well.
Then, I added some filters to existing criteria sets. I also copied an existing criteria set, pasted it, and modified its filters. When I try to display results, I see a View Display Error.
I'd like to revert to the earlier, functional version of the analysis, hopefully without manually undoing all of the filter & criteria changes I made since then.
If you’ve seen a feature like this, I’d like to hear about it!
Micah-
Great question. There have been many times in the past when we wished we had some simple SCM on the Oracle BI Web Catalog. There is currently no "out of the box" source control for the web catalog, but some simple work-arounds do exist.
If you have server-side access to where the web catalog lives, you can start with the following approach.
Oracle BI Web Catalog Version Control Using GIT Server Side with CRON Job:
Make a backup of your web catalog!
Create a GIT repository in the web cat base directory where the root dir and root.atr file exist.
Initial commit everything. ( git add -A; git commit -a -m "initial commit"; git push )
Set up a CRON job to run a script hourly, minutely, etc. that will tell GIT to auto-commit any adds/deletes/modifications to your GIT repository. ( git add -A; git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"; git push )
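A minimal sketch of such an auto-commit script, assuming the catalog lives at a placeholder path:

#!/bin/sh
# auto_commit_webcat.sh - hypothetical helper for the CRON job above
WEBCAT_DIR=/path/to/webcatalog            # adjust to your web catalog root
cd "$WEBCAT_DIR" || exit 1
git add -A
git commit -m "auto_commit_$(date +"%Y-%m-%d_%T")" || exit 0   # nothing to commit is fine
git push

and a crontab entry along the lines of: 0 * * * * /path/to/auto_commit_webcat.sh (hourly).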
Here are the issues with this approach:
If the CRON runs hourly, and an Analysis changes 3 times in the hour, you'll be missing some versions in there.
No actual user-submitted commit messages.
Object details such as the object's pretty "Name" (caption), Description (user populated on the Save dialog), ACLs, and object custom properties are stored in a binary file format. These files have the .atr extension. The good news though is that the actual object definition is stored in a plain text XML file (without the .atr).
Take this as a baseline, and build upon it. Here is how you could step it up!
Use incron or other inotify-based file monitoring such as the ruby-based guard. Using this approach you could commit nearly instantly any time a user saves an object and the BI server updates the file system.
Along with inotify, you could leverage the BI SOAP API to retrieve the actual object details such as Description. This would allow you to create meaningful commit messages. Or, parse the binary .atr file and pull the info out. Here are some good links to learn more about Web Cat ATR files: Link (Keep in mind these links are discussing OBI 10g. The binary format for 11g has changed slightly.)
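As an illustration of the inotify idea, a rough sketch using inotifywait from inotify-tools (incron itself only watches single directories, not whole trees), reusing the placeholder path from the CRON sketch above:

#!/bin/sh
# hypothetical watcher: auto-commit the web catalog whenever files change
WEBCAT_DIR=/path/to/webcatalog
inotifywait -m -r -e close_write,create,delete,moved_to "$WEBCAT_DIR" |
while read -r watched_dir events filename; do
    ( cd "$WEBCAT_DIR" && git add -A && git commit -m "auto_commit_$(date +"%Y-%m-%d_%T")" && git push )
done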
Can I write a command in an SVN hook on Windows which automatically relocates some folders to another location in the repository?
The hook must run on the server.
For example: a user commits files from his working copy (C:\svnworkingcopy\dev).
On the server a hook will run and automatically relocate or copy these files into another folder of the repository (https://svnserver/onlyread),
where this user has read-only permission.
Thanks!
Run svn switch --relocate on a user's working copy with a hook script? It looks like you are confusing the terms. Nevertheless, I advise you to check the following warning in the SVNBook:
While hook scripts can do almost anything, there is one dimension in which hook script authors should show restraint: do not modify a commit transaction using hook scripts. While it might be tempting to use hook scripts to automatically correct errors, shortcomings, or policy violations present in the files being committed, doing so can cause problems. Subversion keeps client-side caches of certain bits of repository data, and if you change a commit transaction in this way, those caches become indetectably stale. This inconsistency can lead to surprising and unexpected behavior. Instead of modifying the transaction, you should simply validate the transaction in the pre-commit hook and reject the commit if it does not meet the desired requirements. As a bonus, your users will learn the value of careful, compliance-minded work habits.
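Along those lines, a minimal sketch of the validate-and-reject approach (the protected path "onlyread/" and the Unix-style hook are assumptions; on Windows the same svnlook check can go in pre-commit.bat):

#!/bin/sh
# pre-commit hook sketch: reject commits that touch the read-only area
REPOS="$1"
TXN="$2"

if svnlook changed -t "$TXN" "$REPOS" | awk '{print $2}' | grep -q '^onlyread/'; then
    echo "Commits under onlyread/ are not allowed; that area is maintained on the server." 1>&2
    exit 1
fi
exit 0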