Perforce - how to prevent "p4 client" from creating a client when the template form is not saved?

The Perforce documentation for p4 client <no args> states:
The p4 client command puts the client spec into a temporary file and
invokes the editor configured by the environment variable P4EDITOR.
For new workspaces, the client name defaults to the P4CLIENT
environment variable, if set, or to the current host name. Saving the
file creates or modifies the client spec.
What I am seeing on our network is that the client is created no matter what, even when I exit without saving.
Example:
[cad_test_user@sws-cab9-0 ~]$ pwd
/home/cad_test_user
[cad_test_user@sws-cab9-0 ~]$ env | grep P4
P4EDITOR=
P4PORT=tcp:p4p:1666
P4DIFF=tkdiff
P4CONFIG=.p4config
P4IGNORE=.ignore
P4USER=cad_test_user
[cad_test_user@sws-cab9-0 ~]$ p4 clients | grep sws-cab9-0
[cad_test_user@sws-cab9-0 ~]$ p4 client
Client: sws-cab9-0
Owner: cad_test_user
Host: sws-cab9-0.aus5.mythic-ai.com
Root: /home/cad_test_user
Options: noallwrite noclobber nocompress unlocked nomodtime normdir
SubmitOptions: submitunchanged
LineEnd: local
View:
<quit without save>
Client sws-cab9-0 saved.
[cad_test_user@sws-cab9-0 ~]$ p4 clients | grep sws-cab9-0
Client sws-cab9-0 2021/04/06 root /home/cad_test_user 'Created by cad_test_user. '
Now, as another user outside of a .p4config hierarchy, I get an unexpected value for %clientRoot%:
[cad_test_user@sws-cab9-0 /]$ p4 -F %clientRoot% -ztag info
/home/cad_test_user
I am wondering if there is something wrong with our default settings; why is the client created and saved even without a write? Ideally, I'd want to manage the default specification to some degree, like:
1. Synthesize the client name so that it is never the hostname, e.g. c:$USER:foo
2. Not have a "Host:" field
3. Define the "Root:" to be somewhere personal
4. Not create the client unless the user does a write-quit!
Thanks for your answers!

Set up a trigger (a form-save trigger on the client form) that rejects a client which doesn't meet your criteria. It's hard to enforce #4 directly, but as long as at least one of your other criteria is something that requires the form to be edited, it's handled well enough indirectly.
Note that you can pair your form-save trigger with a form-out trigger that modifies the default client form -- you could for example replace Root with an obviously invalid field like --ENTER SOMETHING PERSONALIZED HERE-- and then make sure your form-save trigger rejects it. The Perforce sys admin guide has some nice simple example triggers, one of which demonstrates customizing client spec defaults: https://www.perforce.com/manuals/p4sag/Content/P4SAG/scripting.triggers.forms.out.html
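For example, a form-save trigger in this spirit might look like the following rough sketch (untested; the script path, trigger name, and the c:$USER:foo pattern check are assumptions based on the criteria above):
#!/bin/bash
# Sketch of a form-save trigger on the client form.
# Assumed trigger table entry (edited via "p4 triggers"):
#   check-client form-save client "/p4/triggers/check_client.sh %formfile%"
formfile="$1"

# Reject the spec if the form-out placeholder was never edited:
if grep -q -- '--ENTER SOMETHING PERSONALIZED HERE--' "$formfile"; then
    echo "Please set Root: to a personal directory before saving."
    exit 1   # a non-zero exit rejects the form; stdout is shown to the user
fi

# Enforce the c:$USER:foo naming scheme (criterion #1):
client=$(sed -n 's/^Client:[[:space:]]*//p' "$formfile")
case "$client" in
    c:*:*) ;;   # acceptable
    *) echo "Client names must look like c:\$USER:foo, not a host name."
       exit 1 ;;
esac

exit 0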
On your criterion #2, I would recommend against this unless you're in an environment where it's commonplace for multiple host machines to share a single filesystem. The default Host guardrails are there to keep you from confusing yourself (and possibly losing data) by reusing a client spec in ways that throw the workspace state out of whack.

Related

Avoid mass e-mail notification in error analysis bash script

I am selecting error log details from a Docker container and deciding within a shell script how and when to alert about the issue via Discord and/or email.
Because I am receiving the email alerts too often with the same information in the email body, I want to implement the following two adjustments:
Fatal error log selection:
FATS="$(docker logs --since 24h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
Email sent, in case FATS has some content:
swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
How can I send the email only when FATS has different content than on the previous run of the script? I have thought about hashing its content and storing the hash in a text file; if the hash matches the one from the previous run, the email is skipped.
Another option could be a variable in the user's global bash profile, so that no file has to be stored on the file system (to avoid reads/writes).
How can I do that?
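A minimal sketch of that hash idea (untested; the /tmp state-file path is an illustrative assumption, and the swaks invocation is the one from above):
#!/bin/bash
# Send the alert mail only when the fatal-error selection changed.
HASHFILE=/tmp/fats.sha256

FATS="$(docker logs --since 24h "$NODENAME" 2>&1 | grep 'FATAL' | grep -v 'INFO')"
[ -z "$FATS" ] && exit 0   # nothing fatal, nothing to send

NEWHASH="$(printf '%s' "$FATS" | sha256sum | cut -d' ' -f1)"
OLDHASH="$(cat "$HASHFILE" 2>/dev/null)"

if [ "$NEWHASH" != "$OLDHASH" ]; then
    swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" \
          --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" \
          --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
    printf '%s\n' "$NEWHASH" > "$HASHFILE"
fi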
When you are writing a script for your monitoring, add functions for additional functionality, like:
logging all the alerts that have been sent
making sure you don't send more than one alert per hour
sending warnings only during working hours
escalating a message when it fails N times without intermediate success
possibly sending alerts to different receivers (different email addresses, or to SMS or Teams)
providing an interface for an operator so they can look back at when something first went wrong.
When you have control over which messages you send, it is easy to filter duplicate messages (after changing --since).
I've chosen the proposal of @ralf-dreager and reduced the selection to 1d and 1h. Consequently, I've changed my monitoring script to go through either the results of 1d or just 1h, without having to re-select each time. Huge performance improvement, and no need to store anything else in a variable or on the file system.
FATS="$(docker logs --since 1h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"

gcloud cli failing to add record when contents start with dash

I'm working with the LetsEncrypt dns-01 challenge system which entails dynamically creating a TXT record in Google Cloud DNS with specific content, so LE can assert proof of ownership for generating a wildcard certificate (so I can't use http-01). The problem is sometimes LE tells me to create a TXT record that starts with a "-", for example -E_DFDFHJKF1783FSHDJ. I cannot get the gcloud cli to properly accept this data no matter what I do.
Example:
gcloud dns record-sets transaction start --zone=myzone
gcloud dns record-sets transaction add "-E_ASDFSDF" --ttl=30 --zone=myzone --name=test --type=TXT
gcloud dns record-sets transaction remove "-A_DSFKHSDF" --ttl=30 --zone=myzone --name=test2 --type=TXT
If you run those commands and inspect the resulting transaction.yaml, you can see whether it properly contains the right string. If it worked correctly, you should see something like:
- kind: dns#resourceRecordSet
  name: test.
  rrdatas:
  - '"ASDFASDF"'
  ttl: 30
  type: TXT
I am executing this via Node's child_process, but I have the issue even if I execute it directly from bash, so Node isn't really a factor at the moment. I've tried echoing the value in. I've tried setting an environment variable and using that in the string.
No matter what I do I get an error like the following:
ERROR: (gcloud.dns.record-sets.transaction.add) unrecognized arguments: -E_ASDFSDF
It turns out some characters need to be escaped in the CLI. I can confirm that the following works:
gcloud dns --project=myprojectid record-sets transaction add "\-test123" --name=test.mydomain.com. --ttl=300 --type=TXT --zone=myzoneid
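If the record content arrives in a variable (as with the LetsEncrypt token), the same escaping can be applied before calling gcloud; a small sketch, with made-up variable and zone names:
# Prefix a backslash when the TXT value starts with "-" so gcloud
# does not parse it as a flag:
TOKEN="-E_ASDFSDF"
case "$TOKEN" in
    -*) TOKEN="\\$TOKEN" ;;
esac
gcloud dns record-sets transaction add "$TOKEN" \
    --ttl=30 --zone=myzone --name=test --type=TXT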

Bash commands putting out extra information which results into issues with scripts

Okay, hopefully I can explain this correctly, as I have no idea what's causing it or how to resolve it.
For some reason, bash commands (on a CentOS 6.x server) are displaying more information than normal, and that causes issues with certain scripts. I have no clue if there is a name for this behaviour, but hopefully someone knows a solution.
First example.
Correct / good server:
[root@goodserver ~]# vzctl enter 3567
entered into CT 3567
[root@example /]#
(this is the correct behaviour)
Incorrect / bad server:
[root@badserver /]# vzctl enter 3127
Entering CT
entered into CT 3127
Open /dev/pts/0
[root@example /]#
With the "bad" server it will display more information as usual, like:
Entering CT
Open /dev/pts/0
It's as if it's reporting extra information about what it's doing.
Of course, the above is purely cosmetic; however, several bash scripts we use run into real problems because of it.
Part of the script uses the following command (there are more, but this is mainly an example of what's wrong):
DOMAIN=`vzctl exec $VEID 'hostname -d'`
The result of the above command ends up in /etc/named.conf.
On the GOOD server it would be added to named.conf like this:
zone "example.com" {
type master;
file "example.com";
allow-transfer {
200.190.100.10;
200.190.101.10;
common-allow-transfer;
};
};
The above is correct.
On the BAD server it would be added to named.conf like this:
zone "Executing command: hostname -d
example.com" {
type master;
file "Executing command: hostname -d
example.com";
allow-transfer {
200.190.100.10;
200.190.101.10;
common-allow-transfer;
};
};
So it adds output from the action it performs, in this example "Executing command: hostname -d".
Here is another example, running the same command on a good server and on a bad server.
Bad server:
[root@bad-server /]# DOMAIN=`vzctl exec 3333 'hostname -d'`
[root@bad-server /]# echo $DOMAIN
Executing command: hostname -d example.com
Good server:
[root@good-server ~]# DOMAIN=`vzctl exec 4444 'hostname -d'`
[root@good-server ~]# echo $DOMAIN
example.com
My knowledge is limited, but I have tried several things, checking rsyslog and grub.conf, but nothing seems out of the ordinary.
I have no clue why it's displaying the extra information.
Probably it's something simple / stupid, but I have been trying to solve this for hours now and I really have no clue...
So any help is really appreciated.
Added information:
Both servers use: kernel.printk = 7 4 1 7
(I don't know if that's useful)
Well (thanks to Aaron for pointing me in the right direction) I finally found the little culprit which was causing all the issues I experienced with this script (which worked for every other server, so no need to change that, obviously).
The issues were caused by the VERBOSE level set in vz.conf (located in the /etc/vz/ directory). There is an option in there called "VERBOSE", and in my case it was set to 3.
According to OpenVZ's website it does the following:
Increments logging level up from the default. Can be used multiple times.
Default value is set to the value of VERBOSE parameter in the global
configuration file vz.conf(5), or to 0 if not set by VERBOSE parameter.
After I changed VERBOSE=3 to VERBOSE=0 my script worked fine once again (as it did for every other server). :-)
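For reference, the relevant change in /etc/vz/vz.conf looks like this (the comment annotations are mine):
# /etc/vz/vz.conf
# VERBOSE=3   # old value: raises the log level, adding lines like "Executing command: ..."
VERBOSE=0     # default level: vzctl exec prints only the command's own output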
So a big shoutout to Aaron for pointing me in the right direction. The answer is easy when you know where to look!
Sorry to say, but I am kinda disappointed by ndim's reaction. This is the 2nd time he was very unhelpful and rude in his response after that. He clearly didn't read the issue I posted correctly. Oh well.
I would make sure to properly parse the output of the command. In this case, we are only interested in lines of the form
entered into CT 12345
One way of doing this would be to pipe everything through sed and having sed print only the number when the line looks as above (untested, and I always forget which braces/brackets/parens need a backslash in front of them):
whateverthecommand | sed -n 's/^entered into CT \([0-9]\{1,\}\)$/\1/p'
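Applied to the DOMAIN assignment from the question, the same filtering idea might look like this (an untested sketch that simply drops the extra line the higher VERBOSE level adds):
# Keep only the real command output, dropping vzctl's verbose chatter:
DOMAIN=$(vzctl exec "$VEID" 'hostname -d' | grep -v '^Executing command:')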

Consuming function module with SAP Netweaver RFC SDK in Bash

I'm trying to make a request to a function in a SAP RFC server hosted at 10.123.231.123 with user myuser, password mypass, sysnr 00, client 076, language E. The name of the function is My_Function_Nm with params: string Alternative, string Date, string Name.
I use the command line:
/usr/sap/nwrfcsdk/bin/startrfc -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm
But it always shows me the help instructions.
I guess I'm not specifying the -E pathname=edifile option, and that's because I don't know how to create an EDI file containing the parameter values for the specified function. Maybe someone can help me with how to create this file and how to correctly invoke startrfc to consume this function?
Thanks in advance.
If you actually check the help text the problem shows, you should find the following passages:
RFC connection options:
[...]
-2 SNA mode on.
You must set this if you want to connect to R/2.
[...]
-3 R/3 mode on.
You must set this if you want to connect to R/3.
Apparently you forgot to specify -3...
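With that flag added, the invocation from the question would become (untested):
/usr/sap/nwrfcsdk/bin/startrfc -3 -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm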
You should use sapnwrfc.ini, which stores your connection parameters; it should be placed in the same directory as the client program.
A sample file for your app would look like the following:
DEST=TST1
ASHOST=10.123.231.123
USER=myuser
PASSWD=mypass
CLIENT=076
SYSNR=00
RFC_TRACE=0
Documentation on using this file is here.
For calling the function you must create a Bash script, but it may be better to use a Python script.

What gems do you recommend to use for this kind of automation?

I have to create a script to manage maintenance pages server for my hosting company.
I will need to build a CLI that would act like this (example scenario):
(here, let's suppose that mcli is the name of the script and 1.1.1.1 the original server address that hosts the website, www.exemple.com)
Here I just create the loopback interface on the maintenance server with the original IP address and create the Nginx site-specific config file in sites-enabled:
$ mcli register www.exemple.com 1.1.1.1
[DEBUG] Adding IP 1.1.1.1 to new loopback interface lo:001001001001
[WARNING] No root directory specified, setting default maintenance page.
[DEBUG] Registering www.exemple.com maintenance page and reloading Nginx: OK
Then when I want to enable the maintenance page and completely shutdown the website:
$ mcli maintenance www.exemple.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Setting new route to 1.1.1.1 to maintenance server: OK
[DEBUG] Writing configuration: Ok
Then removing the maintenance page:
$ mcli nomaintenance www.exemple.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Removing route to 1.1.1.1: Ok
[DEBUG] Writing configuration: Ok
And I would need a function to see the current state of the websites:
$ mcli list
+------------------+-----------------+------------------+
| Site Name | Server I.P | Maintenance mode |
+------------------+-----------------+------------------+
| www.example.com | 1.1.1.1 | Enabled |
| www.example.org | 1.1.1.2 | Disabled |
+------------------+-----------------+------------------+
$ mcli show www.example.org
Site Name: www.example.org
Server I.P: 1.1.1.1
Maintenance Mode: Enabled
Root Directory : /var/www/maintenance/default/
But I have never done this kind of scripting with Ruby. What gems do you recommend for these kinds of things? For command-line parsing? Column/colorized output? SSH connections (needed to connect to Cisco routers)?
Do you recommend using a local database (SQLite) to store metadata (state changes, current states), or computing it on the fly by analyzing the nginx/interface configuration files and using syslog to monitor changes made with this script?
This script will be used first for a massive physical datacenter migration, and afterwards for standard scheduled downtimes.
Thank you
First of all, I'd recommend you get a copy of Build awesome command-line applications in Ruby.
That said, you might want to check:
GLI, for command-line parsing like git
OptionParser, for command-line parsing
Personally, I'd go for the SQLite approach for storing data, but I'm biased (having a strong SQL background).
Thor is a good gem for handling CLI options. It allows this type of organization in your script:
class Maintenance < Thor
  desc "maintenance", "put up maintenance page"
  method_option :switch, :aliases => '-s', :type => :string

  # The method name is the name of the task that would be run => mcli maintenance
  def maintenance
    # do stuff
  end

  no_tasks do
    # methods that you don't want CLI tasks for go here
  end
end

Maintenance.start
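Hypothetical usage, assuming the script is installed as mcli:
$ mcli maintenance --switch on
$ mcli maintenance -s on    # same thing, via the alias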
I don't really have any good suggestions for column/colorized output though.
I definitely recommend using some kind of database to store state, though. Maybe not SQLite; I would probably opt for a Redis database that stores key/value pairs with the information you are looking for.
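For example, storing and reading a site's state from bash could look like this (a hypothetical sketch; the key and field names are made up):
redis-cli HSET mcli:www.exemple.com server_ip 1.1.1.1
redis-cli HSET mcli:www.exemple.com maintenance enabled
redis-cli HGETALL mcli:www.exemple.com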
We have a similar task. I use the following architecture:
A small application (in C) that generates the config file.
A new update_clusters switch added to the nginx init.d script; it reloads nginx only if the config file has changed:
update_clusters() {
    ${CONF_GEN} --outfile=/tmp/nginx_clusters.conf
    RETVAL=$?
    if [[ "$RETVAL" != "0" ]]; then
        return 5
    fi
    if ! diff ${CLUSTER_CONF_FILE} /tmp/nginx_clusters.conf > /dev/null; then
        echo "Cluster configuration changed. Reload service"
        mv -f /tmp/nginx_clusters.conf ${CLUSTER_CONF_FILE}
        reload
    fi
}
A set of bash scripts to add records to the database.
A web console to add/modify/delete records in the database (ExtJS + nginx module).
