How do I use setmqaut to turn off setall - ibm-mq

I am on Linux and am trying to test context authority, where I am/am not allowed to update the applname in the MQMD.
With 'testuser' I can do this using -setall or +setall
setmqaut -m QMA -n CP0000 -t queue -p testuser -setall
With my main userid (in group mqm) I am always allowed to do this, and cannot make it fail.
dmpmqaut -m QMA -n CP0000 -t queue -p colinpaice -e
gives
profile: CP0000
object type: queue
entity: mqm
entity type: group
authority: get browse put inq set dlt chg dsp passid passall setid clr
- - - - - - - -
profile: CP0000
object type: queue
entity: colinpaice
entity type: group
authority: get browse inq set dlt chg dsp passid passall setid clr
- - - - - - - -
profile: #class
object type: queue
entity: colinpaice
entity type: group
authority: crt
- - - - - - - -
profile: #class
object type: queue
entity: mqm
entity type: group
authority: crt*
And there is no setall specified.
Even if I issue
setmqaut -m QMA -n CP0000 -t queue -p colinpaice -setall
then
dspmqaut -m QMA -n CP0000 -t queue -p colinpaice
still has setall in the list.
I'm sure this must be something obvious, but I cannot see it.
dspmqaut -m QMA -t qmgr -p colinpaice
does not have setall in the list.
Is there another switch I need to toggle?

I found how to do it...
It looks like being in the mqm group trumps other checks, and gives you authority to do anything.
I removed my userid from group mqm and used the runmqsc REFRESH SECURITY command, and the request to use applname worked as expected (I could enable/disable it using setmqaut +setall|-setall).
Adding my id back into the group and issuing REFRESH SECURITY again gave me my original behavior.
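For reference, here is a minimal sketch of that sequence using the queue manager and ids from the question; the group-removal command is just one common way to do it, so adjust for your system:
# Take the userid out of the mqm group (start a new session so the change is picked up)
sudo gpasswd -d colinpaice mqm
# Tell the queue manager's OAM to re-read the authority/group information
echo "REFRESH SECURITY TYPE(AUTHSERV)" | runmqsc QMA
# Now +setall / -setall on the queue profile actually controls the context authority
setmqaut -m QMA -n CP0000 -t queue -p colinpaice -setall
dspmqaut -m QMA -n CP0000 -t queue -p colinpaice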

Related

Unable to put Vault UI in https

I am trying to run Vault on a CRC OpenShift 4.7 cluster with Helm 3, but I have some problems when I try to enable HTTPS for the UI.
Add the hashicorp repo:
helm repo add hashicorp https://helm.releases.hashicorp.com
Install the latest version of Vault:
[tim@localhost config]$ helm install vault hashicorp/vault \
> --namespace vault-project \
> --set "global.openshift=true" \
> --set "server.dev.enabled=true"
Then I run oc get pods
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
I run an interactive shell session with the vault-project-0 pod:
oc rsh vault-project-0
Then I initialize Vault:
/ $ vault operator init --tls-skip-verify -key-shares=1 -key-threshold=1
Unseal Key 1: iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Initial Root Token: s.xVb0DvIMQRYam7oS2C0ZsHBC
Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated master key. Without at least 1 key to
reconstruct the master key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
Export the token:
export VAULT_TOKEN=s.xVb0DvIMQRYam7oS2C0ZsHBC
Unseal Vault:
/ $ vault operator unseal --tls-skip-verify iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.6.2
Storage Type file
Cluster Name vault-cluster-21448fb0
Cluster ID e4d4649f-2187-4682-fbcb-4fc175d20a6b
HA Enabled false
I check the pods:
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 1/1 Running 0 35m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 35m
 
I'm able to get the UI without HTTPS: in the OpenShift console, I switch to the Administrator view and this is what I've done:
Networking part
- Routes > Create routes
Name : vault-route
Hostname : 192.168.130.11
Path :
Service : vault
Target Port : 8200 -> 8200 (TCP)
Now, if I check the URL http://192.168.130.11/ui:
The UI is available.
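For the record, those console steps should correspond roughly to a single CLI command along these lines (a sketch built from the values above, not verified on this cluster):
oc expose service vault --name=vault-route --hostname=192.168.130.11 --port=8200 -n vault-project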
 
In order to enable HTTPS, I've followed the steps here:
https://www.vaultproject.io/docs/platform/k8s/helm/examples/standalone-tls
but I've changed the Kubernetes commands to their OpenShift equivalents:
# SERVICE is the name of the Vault service in Kubernetes.
# It does not have to match the actual running service, though it may help for consistency.
SERVICE=vault-server-tls
# NAMESPACE where the Vault service is running.
NAMESPACE=vault-project
# SECRET_NAME to create in the Kubernetes secrets store.
SECRET_NAME=vault-server-tls
# TMPDIR is a temporary working directory.
TMPDIR=/tmp
Then:
openssl genrsa -out ${TMPDIR}/vault.key 2048
Then create the csr.conf file:
[tim@localhost tmp]$ cat csr.conf
[req]
default_bits = 4096
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = vault-project
DNS.2 = vault-project.vault-project
DNS.3 = *apps-crc.testing
DNS.4 = *api.crc.testing
IP.1 = 127.0.0.1
Create the CSR:
openssl req -new -key ${TMPDIR}/vault.key -subj "/CN=${SERVICE}.${NAMESPACE}.apps-crc.testing" -out ${TMPDIR}/server.csr -config ${TMPDIR}/csr.conf
Create the csr.yaml file:
$ export CSR_NAME=vault-csr
$ cat <<EOF >${TMPDIR}/csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  groups:
  - system:authenticated
  request: $(cat ${TMPDIR}/server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Send the CSR to OpenShift:
oc create -f ${TMPDIR}/csr.yaml
Approve the CSR:
oc adm certificate approve ${CSR_NAME}
Retrieve the certificate:
serverCert=$(oc get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')
Write the certificate out to a file:
echo "${serverCert}" | openssl base64 -d -A -out ${TMPDIR}/vault.crt
Retrieve the OpenShift CA:
oc config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d > ${TMPDIR}/vault.ca
Store the key, cert, and OpenShift CA into a Kubernetes secret:
oc create secret generic ${SECRET_NAME} \
--namespace ${NAMESPACE} \
--from-file=vault.key=/home/vault/certs/vault.key \
--from-file=vault.crt=/home/vault/certs/vault.crt \
--from-file=vault.ca=/home/vault/certs/vault.ca
The command oc get secret | grep vault gives:
NAME TYPE DATA AGE
vault-server-tls Opaque 3 4h15m
Edit my vault-config with the oc edit cm vault-config command:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true
    listener "tcp" {
      tls_cert_file = "/vault/certs/vault.crt"
      tls_key_file = "/vault/certs/vault.key"
      tls_client_ca_file = "/vault/certs/vault.ca"
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "file" {
      path = "/vault/data"
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-03-15T13:47:24Z"
  name: vault-config
  namespace: vault-project
  resourceVersion: "396958"
  selfLink: /api/v1/namespaces/vault-project/configmaps/vault-config
  uid: 844603a1-b529-4e33-9d58-20525ea7bff
Edit the volumeMounts, volumes and VAULT_ADDR parts of my statefulset:
volumeMounts:
- mountPath: /home/vault
  name: home
- mountPath: /vault/certs
  name: certs

volumes:
- configMap:
    defaultMode: 420
    name: vault-config
  name: config
- emptyDir: {}
  name: home
- name: certs
  secret:
    defaultMode: 420
    secretName: vault-server-tls

env:
- name: VAULT_ADDR
  value: https://127.0.0.1:8200
I delete my pod so that all my changes are taken into account:
oc delete pods vault-project-0
And...
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
vault-project-0 is at 0/1 but Running. If I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 1s (x6 over 26s) kubelet Readiness probe failed: Error checking seal status: Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client
I think that I've missed something but I don't know what...
Can someone tell me how to enable HTTPS for the Vault UI with OpenShift?

Auditbeat not picking up authentication events in CentOS 7

I am trying to ship the authentication-related events of my CentOS 7 machine to Elasticsearch. Strangely, I am not getting any authentication events.
When I ran the debug command auditbeat -c auditbeat.conf -d -e "*", I found something like the output below:
{
  "@timestamp": "2019-01-15T11:54:37.246Z",
  "@metadata": {
    "beat": "auditbeat",
    "type": "doc",
    "version": "6.4.0"
  },
  "error": {
    "message": "failed to set audit PID. An audit process is already running (PID 68504)"
  },
  "beat": {
    "name": "env-cs-westus-devtest-66-csos-logs-es-master-0",
    "hostname": "env-cs-westus-devtest-66-csos-logs-es-master-0",
    "version": "6.4.0"
  },
  "host": {
    "name": "env-cs-westus-devtest-66-csos-logs-es-master-0"
  },
  "event": {
    "module": "auditd"
  }
}
Also there was an error line like below:
Failure receiving audit events {"error": "failed to set audit PID. An audit process is already running (PID 68504)"}
Machine details
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Auditbeat Configuration File
#================================ General ======================================
fields_under_root: False
queue:
  mem:
    events: 4096
    flush:
      min_events: 2048
      timeout: 1s
max_procs: 1
max_start_delay: 10s
#================================= Paths ======================================
path:
  home: "/usr/share/auditbeat"
  config: "/etc/auditbeat"
  data: "/var/lib/auditbeat"
  logs: "/var/log/auditbeat/auditbeat.log"
#============================ Config Reloading ================================
config:
  modules:
    path: ${path.config}/conf.d/*.yml
    reload:
      period: 10s
      enabled: False
#========================== Modules configuration =============================
auditbeat.modules:
#----------------------------- Auditd module -----------------------------------
- module: auditd
  resolve_ids: True
  failure_mode: silent
  backlog_limit: 8196
  rate_limit: 0
  include_raw_message: True
  include_warnings: True
  audit_rules: |
    -w /etc/group -p wa -k identity
    -w /etc/passwd -p wa -k identity
    -w /etc/gshadow -p wa -k identity
    -w /etc/shadow -p wa -k identity
    -w /etc/security/opasswd -p wa -k identity
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
    -a always,exit -F dir=/home -F uid=0 -F auid>=1000 -F auid!=4294967295 -C auid!=obj_uid -F key=power-abuse
    -a always,exit -F arch=b64 -S setuid -F a0=0 -F exe=/usr/bin/su -F key=elevated-privs
    -a always,exit -F arch=b32 -S setuid -F a0=0 -F exe=/usr/bin/su -F key=elevated-privs
    -a always,exit -F arch=b64 -S setresuid -F a0=0 -F exe=/usr/bin/sudo -F key=elevated-privs
    -a always,exit -F arch=b32 -S setresuid -F a0=0 -F exe=/usr/bin/sudo -F key=elevated-privs
    -a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -F key=elevated-privs
    -a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -F key=elevated-privs
#----------------------------- File Integrity module -----------------------------------
- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc
  - /home/jenkins
  exclude_files:
  - (?i)\.sw[nop]$
  - ~$
  - /\.git($|/)
  scan_at_start: True
  scan_rate_per_sec: 50 MiB
  max_file_size: 100 MiB
  hash_types: [sha1]
  recursive: False
#================================ Outputs ======================================
#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
  enabled: True
  hosts:
  - x.x.x:9200
  compression_level: 0
  protocol: "http"
  worker: 1
  bulk_max_size: 50
  timeout: 90
#================================ Logging ======================================
logging:
  level: "info"
  selectors: ["*"]
  to_syslog: False
  to_eventlog: False
  metrics:
    enabled: True
    period: 30s
  to_files: True
  files:
    path: /var/log/auditbeat
    name: "auditbeat"
    rotateeverybytes: 10485760
    keepfiles: 7
    permissions: 0600
  json: False
Version of Auditbeat
auditbeat version 6.4.0 (amd64), libbeat 6.4.0
Has anyone faced a similar issue and found a resolution?
Note: this configuration for Auditbeat successfully captures authentication events on Ubuntu.
I posted the same question in the Elastic Beats forum and got a solution there.
As per their suggestion, turning off the auditd service allows audit events to be captured by Auditbeat. I tried it and it worked for me. But I am not sure of the implications of turning auditd off, so I might switch to a Filebeat-based solution.
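In shell terms the suggestion amounts to something like this (a sketch; on CentOS 7 auditd typically refuses a plain systemctl stop, hence the service wrapper):
# Stop the system auditd daemon so Auditbeat can register itself as the audit PID
sudo service auditd stop
# Optionally keep auditd from reclaiming the audit socket on the next boot
sudo systemctl disable auditd
# Restart Auditbeat so it retries setting the audit PID
sudo systemctl restart auditbeat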

Email sending error in elastalert. SMTPSenderRefused: (530, '5.5.1 Authentication Required')

I got a Gmail authentication error; my config and error message are below.
I have already allowed less secure apps in Gmail.
The email section of my config.yaml file is as below:
name: frequency_rule
type: frequency
index: security
num_events: 50
timeframe:
  days: 1
filter:
- term:
    host.keyword: "azure-2"
alert:
- email
email:
  "to_address@gmail.com"
smtp_host: "smtp.gmail.com"
smtp_port: "465"
smtp_ssl: true
from_addr: "from_address@gmail.com"
user: "from_address@gmail.com"
password: "password"
The error message is as below:
PS C:\Users\smiforce-2ndPC\Downloads\Compressed\elastalert-master\elastalert-master> python -m elastalert.elastalert --verbose --config ./config.yaml --rule ./alert_rules/frequency4.yaml
INFO:elastalert:Starting up
INFO:elastalert:Queried rule frequency_rule4 from 2017-11-20 09:48 Central Standard Time to 2017-11-21 09:48 Central Standard Time: 24 / 24 hits
ERROR:root:Traceback (most recent call last):
File "C:\Users\smiforce-2ndPC\Downloads\Compressed\elastalert-master\elastalert-master\elastalert\elastalert.py", line 1246, in alert
return self.send_alert(matches, rule, alert_time=alert_time, retried=retried)
File "C:\Users\smiforce-2ndPC\Downloads\Compressed\elastalert-master\elastalert-master\elastalert\elastalert.py", line 1326, in send_alert
alert.alert(matches)
File "elastalert\alerts.py", line 451, in alert
self.smtp.sendmail(self.from_addr, to_addr, email_msg.as_string())
File "C:\Python27\lib\smtplib.py", line 737, in sendmail
raise SMTPSenderRefused(code, resp, from_addr)
SMTPSenderRefused: (530, '5.5.1 Authentication Required. Learn more at\n5.5.1 https://support.google.com/mail/?p=WantAuthError l4sm636961ioc.69 - gsmtp', 'from_address#gmail.com')
ERROR:root:Uncaught exception running rule frequency_rule4: (530, '5.5.1 Authentication Required. Learn more at\n5.5.1 https://support.google.com/mail/?p=WantAuthError l4sm636961ioc.69 - gsmt
The user and password fields should not be stored in the same config.yaml file but in another file which is referenced in config.yaml.
For instance, create another file named auth.yaml and add the user and password configuration into it:
user: "from_address#gmail.com"
password: "password"
Then in config.yaml you can reference that file using this setting:
smtp_auth_file: "/path/to/auth.yaml"

Returned lines overlap when using ncat pipe in a shell script

I am using /bin/sh to write a shell script that fetches data over a telnet-style connection via ncat, like so:
echo 'transport info' | ncat hostname 9993
When I do this from a command line the output looks like this:
500 connection info:
protocol version: 1.3
model: HyperDeck Studio
208 transport info:
status: record
speed: 0
slot id: 1
clip id: none
display timecode: 00:28:01:27
timecode: 00:00:00:00
video format: 1080i5994
loop: false
But when I do it in a /bin/sh shell script it looks like this:
loop: falseat: 1080i599443:15
Here is my sample script:
#!/bin/sh
FOO="$( echo "transport info" | ncat -C hostname 9993 )"
echo $FOO
Anyone know why this happens?
The remote device terminates its lines with CRLF, and the stray carriage returns make later fields overwrite earlier ones when the captured text is echoed back on a single line. Stripping the carriage returns, here with dos2unix, gives clean output:
#!/bin/sh
echo "transport info" | ncat -C hostname 9993 | dos2unix > /tmp/test.txt
cat /tmp/test.txt
Output:
500 connection info:
protocol version: 1.3
model: HyperDeck Studio
208 transport info:
status: preview
speed: 0
slot id: none
clip id: none
display timecode: 01:10:01:01
timecode: 00:00:00:00
video format: 1080i5994
loop: false
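An equivalent sketch without the temporary file: strip the carriage returns with tr and quote the variable so the newlines survive the echo.
#!/bin/sh
# Same fetch, but remove the CRs inline and preserve line breaks when printing
FOO="$(echo 'transport info' | ncat -C hostname 9993 | tr -d '\r')"
echo "$FOO"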

Bash script makes connection using FreeTDS, interacts, doesn't exit (just hangs)

I'm using FreeTDS in a script to insert records into an MSSQL database. The USE and INSERT commands work, but the exit command doesn't and it hangs. I've tried redirecting stdout but cat complains. I suppose I will use Expect otherwise. Meh. Thanks.
echo -e "USE db\nGO\nINSERT INTO db_table (id, data, meta)\nVALUES (1, 'data', 'meta')\nGO\nexit" > tempfile
cat tempfile - | tsql -H 10.10.10.10 -p 1433 -U user -P pass
Did you mean to do this: cat tempfile -? It means that it will wait for you to press Ctrl+D, because it is trying to read from standard input as well.
If not, remove the -.
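That is, with the trailing - dropped, the pipeline from the question becomes:
cat tempfile | tsql -H 10.10.10.10 -p 1433 -U user -P pass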
Also, as Ignacio suggests, you could write it more cleanly as a heredoc:
tsql -H 10.10.10.10 -p 1433 -U user -P pass <<EOF
USE db
GO
INSERT INTO db_table (id, data, meta)
VALUES (1, 'data', 'meta')
GO
exit
EOF
Or just do the echo with literal newlines rather than \n:
echo "
USE db
GO
INSERT INTO db_table (id, data, meta)
VALUES (1, 'data', 'meta')
GO
exit
" > tempfile
and then run it by using standard input redirection (<) like this:
tsql -H 10.10.10.10 -p 1433 -U user -P pass < tempfile
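For completeness, here is a sketch of a one-liner that avoids the temporary file altogether by letting printf supply the newlines:
# printf prints each argument on its own line, so no \n escapes or temp file are needed
printf '%s\n' "USE db" "GO" "INSERT INTO db_table (id, data, meta)" "VALUES (1, 'data', 'meta')" "GO" "exit" \
  | tsql -H 10.10.10.10 -p 1433 -U user -P pass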
