I have some Windows Server 2016 instances on GCE (for Jenkins agents).
I'm wondering what the best practice is when it comes to the computer name.
Currently, when I want to create a new node, I clone an instance (create images from disks + create template + create instance from template).
On this clone, I change the computer name (in Windows) so that it matches the name on GCE. Is this useful? Recommended? Bad? Needed?
I know that the name of the Jenkins node needs to be the same as the name of the GCE instance (to be picked up easily). However, I don't think the Windows computer name matters.
So, should I pick an identical generic name for all of them? A prefix+random generated name? Continue with the instance=computer=node name?
The node name that I use in Jenkins is always retrieved from env.NODE_NAME (when needed), so that should not break any pipeline. Not sure though, as I may be missing something (internal to Jenkins).
Bonus question: After cloning, I have to do some modifications on the clone for Perforce (p4) to work.
I temporarily set some env variables
I duplicate the workspace: p4 client -t prefix-buildX-suffix prefix-buildY-suffix
I set up the stream (not sure if this is doable in one step; see the sketch below)
Then I regenerate the list of files: p4 sync -k <root_folder_to_be_generated>/...@YYYY/MM/DD
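Roughly, the sequence I run on the clone looks like this (client names, the stream path, and the date are placeholders, and I'm assuming stream workspaces):

REM rough sketch of the clone-side setup, assuming P4PORT/P4USER are already configured
set P4CLIENT=prefix-buildY-suffix
REM duplicate the template workspace
p4 client -t prefix-buildX-suffix prefix-buildY-suffix
REM point the new workspace at the stream (separate step for now)
p4 client -s -S //depot/some-stream prefix-buildY-suffix
REM regenerate the have-list without transferring files
p4 sync -k <root_folder_to_be_generated>/...@YYYY/MM/DD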
So, here also there's a name, prefix-buildY-suffix, which is the same as the one from the instance=computer=node (buildY). It may be a separate question, but since it's from the same context, I'm putting it here: should I recreate a new workspace every time? Given that it's on several machines, I'd say yes; otherwise I imagine that p4 would have contradictory information about the state of this workspace. So here too I currently need to customize the name. Even if I make the Windows computer name generic, I would still need to customize the p4 workspace name, wouldn't I?
Jenkins must have the same computer name as the one on the network.
So, all three names must be identical.
How do I change the name of an experiment?
Tried: I used dvc exp run -n to name the experiment, then used git to push to GitHub. However, the experiment name is still the SHA.
Expected: the experiment name to be displayed on the Iterative Studio interface.
Actually happened: the GitHub SHA value is shown instead of the name.
There are two types of experiments in the DVC ecosystem that we need to distinguish, and there are a few different approaches to naming them.
First, there are what we sometimes call "ephemeral" experiments: those that do not create commits in your Git history unless you explicitly say so. They are described in this Get Started section. For each of those experiments a name is auto-generated (e.g. angry-upas in one of the examples from the docs), or you can use dvc exp run -n ... to pass a particular name.
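For instance, a minimal sketch (the experiment name and Git remote below are just placeholders):

# run the pipeline as a named (ephemeral) experiment
dvc exp run -n my-exp-name
# list experiments and their names
dvc exp show
# share the named experiment via the Git remote so Studio can pick it up
dvc exp push origin my-exp-name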
Another way to create those experiments (and send them to Studio) is to use the DVC logger (DVCLive), e.g. as described here. Those experiments will be visible in Studio with auto-generated names (or the name that was provided when they were created).
Now, there is another type of experiment: a commit. Something that someone decided to make persistent and share with the team via a PR and/or a regular commit. Those are the ones shown in the image shared in the question.
Since they are regular commits, the regular Git rules apply to them: they have a hash, they have descriptions, and, most relevant to this discussion, they can have tags. All this information is reflected in the Studio UI.
E.g. you can see this in the public example-get-started repo.
I think in your case, tags would be the most natural way to rename those. Maybe we can introduce a way to push the experiment name as a Git tag along with the experiment when it's being saved. WDYT?
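In the meantime, a sketch of the tag route (the tag name and commit hash are placeholders):

# attach a readable name to the commit that holds the persisted experiment
git tag -a exp-baseline -m "baseline experiment" <commit-sha>
# push the tag so it shows up alongside the commit in Studio
git push origin exp-baseline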
Let me know if that answers your question.
Our Jenkins job downloads some code from a Perforce server, using a pre-defined workspace. It sometimes fails with the following error message:
Client 'xxxx' can only be used from host 'yyyy'.
When I look at the workspace ("client" is an obsolete name for workspace), I see that its settings don't mention host yyyy at all.
I suspect that people (or unknown scripts) change the workspace's settings, do some work and then change them back. If a Jenkins job is scheduled to run during that time, it fails.
How can I determine if I guessed correctly? Are there any logs on the Perforce server which report workspace changes? Maybe some server setting to record all changes to workspaces?
Workspace settings look like something I should be able to track and/or revert using version control; is this really the case?
First and foremost, you should set the locked option on the client if you don't want anyone else messing with it (and set its Owner to be the user who runs the Jenkins job, and ensure that this user is password-protected so that nobody else can impersonate Jenkins).
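For example, flipping that option non-interactively could look like this (a sketch; it assumes the spec's Options line still contains the default unlocked token):

p4 client -o xxxx | sed 's/unlocked/locked/' | p4 client -i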
To track changes to client specs, you can set up a spec depot (just create a depot with Type: spec). This will cause every spec update to be saved in that depot as a revision of a text file, e.g. client xxxx will correspond to a text file called //spec/client/xxxx. You can run normal commands like p4 annotate on that file to see its change history, and you can pipe old versions of the file into the current client spec by doing, e.g.:
p4 print -q //spec/client/xxxx | p4 client -i
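Creating the spec depot and browsing a client's history could look roughly like this (a sketch; depot creation needs admin/super access, and depending on the spec depot's Suffix setting the archived file may be named xxxx.p4s):

# one-time: create a depot of type spec (opens the depot form)
p4 depot -t spec spec
# after that, every saved client spec is archived; inspect its history with
p4 filelog //spec/client/xxxx
p4 annotate //spec/client/xxxx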
But again, first and foremost, persistent clients that automation depends on should simply be locked so that they can't be sabotaged (intentionally or unwittingly) by other users.
We have a number of (developer) existDb database servers, and some staging/production servers.
Each has its own configuration, and they differ slightly.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing the content of the servers, a single burnt-in XML file is not sufficient, since it is overwritten during copying from the other server.
For this, we need the physical name of the actual database server.
The only function I found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet, or external). That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing local files in the file system is neither secure nor fast.)
How can I get the physical name of the eXist-db server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
Also, you might find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").
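A rough sketch of wiring that up, assuming eXist-db is launched via start.jar (the property name and value are just examples; adjust for your startup script, service wrapper, or Docker image):

java -Dour-server-name=server1.mydomain.com -jar start.jar jetty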
I botched a DC's AD / DNS pretty badly over the course of several years (of learning experiences), to the point where I could no longer join or leave the domain with clients. I have a NAS that used to plug into AD via SMB, and that is how all the users (my family) used to access their files.
I have recreated my infrastructure from scratch on Windows Server 2016, using best practices this time around. Is there any way to easily migrate those permissions to equivalent users in a new domain/forest?
Could I possibly recreate the SIDs/GUIDs of the new users to match the old ones? I'm assuming not, because they contain a string generated uniquely per Windows installation.
Could I possibly do this from the NAS side without having to go through each individual's files to change ownership?
Thank you.
One tool you can use to translate permissions from the original SIDs to the new SIDs is Microsoft's SubInACL.
SubInACL needs to know which old SID corresponds to which new SID or username, and it then executes the translation for all data on the NAS server. For example:
subinacl /subdirectories "Z:\*.*" /replace=S-1-5-1-2-3-4-5=NEWDOMAIN\newuser
How long the translation takes depends on the number of files and folders; if it's tens of thousands, expect hours.
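If several accounts need to be mapped, the replacements can be chained into one pass over the share (a sketch; the domain and user names are illustrative):

subinacl /subdirectories "Z:\*.*" /replace=OLDDOMAIN\alice=NEWDOMAIN\alice /replace=OLDDOMAIN\bob=NEWDOMAIN\bob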
There are also other tools, such as SetACL, or the PowerShell cmdlets Get-Acl/Set-Acl.
You cannot recreate objects with the original SIDs and GUIDs unless you're doing a restore of the AD infrastructure or cloning/migrating the original identities into new ones that carry the original SID in the sidHistory attribute.
So if you're already running a domain controller with the NAS in a newly created forest, and the old one suffered from the issues you wanted fixed, that option would probably be much more painful, and it's easier to go for SID translation.
When setting up a new Hudson/Jenkins instance, I run into the problem that I have to manually provide all the email addresses for the SCM users.
We are using Subversion, and I can't generate the mail addresses from the usernames. I have a mapping, but I found no way to copy or edit it without using the GUI. With 20+ users that gets tedious, and I'd like to be able to just edit a file or something.
Maybe I'm missing some trivial thing like an scmusers.xml (which would totally do the job)?
I've got two solutions so far:
The users are stored in users/USERNAME/config.xml, which could be versioned/updated/etc.
Make use of the RegEx+Email+Plugin: create one rule per user and version that file.
With 20+ users, setting up a list for the scm users is the way to go. Then when folks add/leave the group, you only have to edit the mailing list instead of the Hudson jobs. Also depending on your mailing list software, folks might be able to add and drop themselves from the list which would save you the time of maintaining it yourself in Hudson.
You might also want to look into the alias support of whatever email server your Hudson server is using. Let Hudson send out the emails it wants to using the SVN usernames, but then define aliases in your /etc/aliases file (or equivalent for your email server) that map the SVN usernames to the actual email addresses.
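For example, entries in /etc/aliases might look like this (the usernames and addresses are made up):

jdoe: john.doe@example.com
asmith: alice.smith@example.com

After editing the file, run newaliases (or your mail server's equivalent) so the new mappings take effect.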