accessing F5 load balancer using unix script - shell

I'm new to F5 load balancers. Is there any way I can stop/start servers in the F5 pool using unix scripts?
Thanks,
Santosh

If you are going to stop/start pool members (nodes) directly on the BIG-IP, you can use the TMSH commands within the script. In this case:
Force Node Offline: >tmsh modify /ltm node <nodename> state user-down session user-disabled - This will prevent new connections from occurring but will not drop existing connections (it will not drain)
Delete Existing Connections: >tmsh delete /sys connection ss-server-addr <nodeIP> - This will force-drain any existing connections from the node (something to do after you force a node offline and persistent connections are preventing maintenance)
Enable Node: >tmsh modify /ltm node <nodename> state user-up session user-enabled - This will return the node to accepting traffic from any disabled state.
After changing a configuration you'll want to run tmsh save /sys config.
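Put together, a minimal maintenance sketch might look like this (the node name, address, and the down/up switch are placeholders of my own; it just wraps the tmsh commands above and must run on the BIG-IP itself):

#!/bin/bash
# Usage: ./node_maint.sh <nodename> <nodeIP> [down|up]
NODE="$1"; NODE_IP="$2"; ACTION="${3:-down}"
if [ "$ACTION" = "down" ]; then
    tmsh modify /ltm node "$NODE" state user-down session user-disabled
    tmsh delete /sys connection ss-server-addr "$NODE_IP"   # drain existing connections
else
    tmsh modify /ltm node "$NODE" state user-up session user-enabled
fi
tmsh save /sys config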
If you want to manage these attributes remotely, you can use the iControl REST API via curl, or there's a Python SDK available if you want to use the REST commands within your Python scripts.
Curl example: >curl -sk -u XXXXX:XXXX https://bigip_addr/mgmt/tm/ltm/node/~Common~NODE/ -H "Content-Type: application/json" -X PUT -d '{"state": "user-down", "session": "user-disabled"}'
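To bring the node back up over REST, the same endpoint takes the enabling values from the tmsh section above (credentials and node path are placeholders, as before):
>curl -sk -u XXXXX:XXXX https://bigip_addr/mgmt/tm/ltm/node/~Common~NODE/ -H "Content-Type: application/json" -X PUT -d '{"state": "user-up", "session": "user-enabled"}'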
Here are the available BIG-IP TMSH commands you can use within your script (DevCentral login required), and here is how to use the BIG-IP iControl REST API. I use this one myself so I can run simple scripts remotely to manage common objects. Here are the BIG-IP iControl REST commands specific to node management (again, DevCentral login required).
Hope this gets you where you need to be.


how can I add sampler after sampler in jmeter

I am using JMeter to test against my dev server.
The scenario is like this:
0. Turn OFF all firewalls on both the local PC (so-called HOST) and the client PC (so-called CLIENT).
1. Turn on JMeter at my HOST
--> add Thread Group, bzm - Parallel Controller. I am not certain at this point.
2. Connect to CLIENT (once)
-> maybe by SSH Command or Remote Start.
3. Execute my test script at CLIENT (several times, more than 100 times)
-> such as 'ls', 'pwd', 'mkdir dir123', 'ls' in a row!
-> maybe by OS Process Sampler. I am not certain at this point.
4. Get the result of (3) at my HOST JMeter via View Results Tree.
This is the scenario that I thought of.
Can anyone help me with this issue?
Because there are so many samplers and so little information, I'm having a tough time.
Thank you for reading.
Turning off the firewall is not the best idea; just open the port you will be using. The default port for SSH is 22, and normally it gets opened more or less automatically when you install an OpenSSH server.
I don't think you need the Parallel Controller; it has very specific use cases, like simulating AJAX requests. It will be sufficient to specify the desired number of users/loops/test duration at the Thread Group level.
Remote Start is for JMeter distributed testing. If you want to run a shell command on the client, use the OS Process Sampler or the SSH Command sampler; see the How to Run External Commands and Programs Locally and Remotely from JMeter article for more details.
The same goes for point 2: if you need to create a directory, you need to choose one of the aforementioned samplers depending on your HOST and CLIENT operating systems. Just be aware that only the first operation/iteration succeeds; on all subsequent attempts you will get cannot create directory 'dir123': File exists (see the sketch below). I'm also not certain what you're trying to test here: SSH server performance? Operating system performance? Network performance?
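For instance, an SSH Command sampler (or a single ssh call from an OS Process Sampler) could run the whole sequence as one command; using mkdir -p here is a hypothetical tweak that avoids the "File exists" error on repeated iterations:
>ssh user@client 'ls; pwd; mkdir -p dir123; ls'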
If you add a View Results Tree listener to your Test Plan and run your test in GUI mode, it will automatically capture all the sampler results.

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs, but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Golang SDK, which allows me to fetch VM instance info. Unfortunately the SDK does not contain functionality that allows running a bash script on a specific instance (unlike the Azure Cloud SDK, for example).
Options I've found:
1. The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
2. Add an instance-level public SSH key, establish an SSH connection, and run a script using a Go SSH client.
Problems:
1. Obviously a startup script requires an instance reboot, and this is not possible in my use case.
2. SSH might also be problematic, in case the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login); I can just use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instance to additional risks.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As said by Kolban, there is no API to trigger a bash script inside the VM from outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, started with a startup script, that listens for a custom metadata key; let's say it checks every second.
When the metadata is updated, the daemon can perform actions. You can imagine that the metadata contains the script to run along with its parameters. At the end of the run, the metadata is cleaned up by the daemon.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you can have something similar to what you expected.
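A minimal sketch of such a daemon, assuming a custom metadata key named run-script (the key name and the gcloud-based clean-up are my own choices; clearing the key this way needs the VM's service account to have compute permissions, and remove-metadata may need an explicit --zone):

#!/bin/bash
# Poll the hypothetical "run-script" custom metadata key every second.
MD_URL="http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-script"
while true; do
    CMD=$(curl -s -f -H "Metadata-Flavor: Google" "$MD_URL")
    if [ -n "$CMD" ]; then
        bash -c "$CMD"                                   # run whatever the key contains
        gcloud compute instances remove-metadata "$(hostname)" --keys run-script
    fi
    sleep 1
done

To trigger it remotely, you would then call the setMetadata API, e.g. gcloud compute instances add-metadata <instance> --metadata run-script='your command'.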
Think of GCP as providing the virtual machine infrastructure such as compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I hear you saying that you want to run a bash script within the VM; that is outside the governance of GCP. GCP will only affect the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine instance. I'm going to say that there is no such API.

How do I prevent access to a mounted secret file?

I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring Config.
Said YAML file is mounted as a k8s secret at etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still sh into the container and view the YAML file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, given that your app doesn't need to read it again.
OR,
You can set those properties via --env-file, and your app should then read them from the environment. But if someone can still log in to that container, they can read the environment variables too.
OR,
Set those properties in the JVM rather than the system environment by using -D; Spring can read properties from the JVM environment too.
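For example (the property name here is hypothetical):
>java -Dencryption.key="$KEY" -jar app.jar
Spring will resolve ${encryption.key} from JVM system properties just as it would from the environment.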
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to the worker nodes and no one can use the Docker daemon directly, there is still a way to read a secret.
If anyone in the namespace has access to create pods (which implies the ability to create Deployments/StatefulSets/DaemonSets/Jobs/CronJobs and so on), they can easily create a pod, mount the secret inside it, and simply read it. Even someone with only the ability to patch pods/Deployments and so on can potentially read all the secrets in the namespace. There is no way to escape that.
For me that's the biggest security flaw in Kubernetes, and that's why you must be very careful about granting access to create and patch pods/Deployments and so on. Always limit access to the namespace, always exclude secrets from RBAC rules, and always try to avoid handing out pod-creation capability.
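As a hypothetical illustration, a namespace-scoped role created with kubectl that deliberately leaves secrets (and any create/patch verbs on pods) out:
>kubectl create role pod-reader --verb=get,list,watch --resource=pods,pods/log -n myapp
>kubectl create rolebinding pod-reader-binding --role=pod-reader --user=alice -n myapp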
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool looks at pod events and can take action when a shell is started in your container. A typical action would be to immediately kill the container, so that reading the secret cannot occur; Kubernetes will then restart the container to avoid service interruption.
Note that you must still forbid access to the node itself to avoid Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can then unset that variable, rendering the secret inaccessible from then on.

slurm - action_unknown in pam_slurm_adopt

What does "source job" refer to in the description of action_unknown?
action_unknown
The action to perform when the user has multiple jobs on the node
and the RPC does not locate the **source job**. If the RPC mechanism works
properly in your environment, this option will likely be relevant only
when connecting from a login node. Configurable values are:
newest (default)
Pick the newest job on the node. The "newest" job is chosen based
on the mtime of the job's step_extern cgroup; asking Slurm would
require an RPC to the controller. Thus, the memory cgroup must be in
use so that the code can check mtimes of cgroup directories. The user
can ssh in but may be adopted into a job that exits earlier than the
job they intended to check on. The ssh connection will at least be
subject to appropriate limits and the user can be informed of better
ways to accomplish their objectives if this becomes a problem.
allow
Let the connection through without adoption.
deny
Deny the connection.
https://slurm.schedmd.com/pam_slurm_adopt.html
pam_slurm_adopt will try to capture an incoming SSH session into the cgroup corresponding to the job currently running on the host. This option is meant to decide what to do when there are several jobs running for the user who initiates the ssh command.
The 'source job' is the job ID of the process that initiates the ssh call. Typically, if you use an interactive ssh session from the frontend, there is no 'source job', but if the ssh command is run from within a submission script, then the 'source job' is the one corresponding to that submission script.
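For reference, the option is set on the module line in the PAM stack, typically in /etc/pam.d/sshd (the control flag shown here depends on how the rest of your stack is arranged):
account    sufficient    pam_slurm_adopt.so action_unknown=newest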

HipChat Server login screen limit

Is it possible to restrict access to the HipChat Server login screen to certain networks only, for security reasons?
I need to limit only the site root.
Unfortunately, there's no feature right now that allows you to do that directly.
One way you could work around it is to write a script that updates the nginx configuration to add IP filtering. This question proposes a method to achieve something similar to what you describe (you would need to customize the script to fit HipChat Server's nginx configuration, though):
cat /var/www-allow/client1-allow.conf
allow 192.168.1.1;
allow 10.0.0.1;

cat /etc/nginx/sites/client1.conf
...
server {
    include /var/www-allow/client1-allow.conf;
    deny all;
}
Try the script manually. Once it works, move it to /home/admin/startup_scripts/ipfilter (keep the file without an extension, and make it executable) so that your configuration persists across reboots and upgrades (/home/admin/startup_scripts contains a few examples of different scripts).
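A hypothetical shape for that ipfilter script, following the snippet above (the allow-list contents and the reload command are placeholders; adjust them to wherever HipChat Server keeps its server block):

#!/bin/sh
# Regenerate the allow-list and reload nginx so the filter takes effect.
cat > /var/www-allow/client1-allow.conf <<'EOF'
allow 192.168.1.1;
allow 10.0.0.1;
EOF
service nginx reload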
