RabbitMQ bind exchange to exchange in bash

I would like to bind an exchange to another exchange via a bash script (which I plan to use within a Dockerfile).
In application code (e.g. JS) it works perfectly, but I would like to use a plain bash script for it, if possible.
The part of the JS code that I would like to reproduce in bash:
// ...
await ch1.assertExchange('test-exchange', 'headers');
await ch1.assertExchange('another-exchange', 'headers');
await ch1.bindExchange('test-exchange', 'another-exchange', '', {
  'x-match': 'all',
  target: 'pay-flow'
});
// ...
When I run the JS code, it works fine. I get the following results in RabbitMQ:
bash-5.1# rabbitmqadmin -u guest -p guest list bindings
+---------------+------------------+-------------+
| source        | destination      | routing_key |
+---------------+------------------+-------------+
|               | test-queue       | test-queue  |
| test-exchange | test-queue       |             |
| test-exchange | another-exchange |             |
+---------------+------------------+-------------+
What I tried in bash:
#!/bin/bash
rabbitmqadmin -u guest -p guest declare binding source=test-exchange destination=another-exchange
Then I got this message:
** Not found: /api/bindings/%2F/e/test-exchange/q/another-exchange
From the CLI/rabbitmqadmin documentation it seems I am only able to bind an exchange to a queue.
Does anyone have any idea how to solve this? (Maybe write the binder code in Python and run it from the bash script?) Is there any kind of CLI tool capable of doing it?

Please see the command's help:
$ rabbitmqadmin help subcommands | grep -F 'declare binding'
declare binding source=... destination=... [destination_type=... routing_key=... arguments=...]
This is the correct set of arguments:
rabbitmqadmin -u guest -p guest declare binding source=test-exchange destination=another-exchange destination_type=exchange
Of course, before you run the above command the two exchanges must exist.
Tested as follows:
$ rabbitmqadmin declare exchange name=test-exchange type=direct
exchange declared
$ rabbitmqadmin declare exchange name=test-exchange-2 type=direct
exchange declared
$ rabbitmqadmin declare binding source=test-exchange destination=test-exchange-2 destination_type=exchange
binding declared
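For the original Dockerfile use case, the whole flow can be wrapped in a small bash script. This is a minimal sketch assuming the guest/guest credentials and headers exchange type from the question; the RABBITMQADMIN override and the bind_exchanges function name are my additions, not part of rabbitmqadmin:

```shell
#!/bin/bash
set -euo pipefail

# Allow overriding the CLI (e.g. RABBITMQADMIN=echo for a dry run);
# defaults to the real rabbitmqadmin tool.
RABBITMQADMIN=${RABBITMQADMIN:-rabbitmqadmin}

bind_exchanges() {
  local src=$1 dst=$2
  # Re-declaring an exchange with identical properties is harmless,
  # so this script can be re-run safely.
  $RABBITMQADMIN -u guest -p guest declare exchange name="$src" type=headers
  $RABBITMQADMIN -u guest -p guest declare exchange name="$dst" type=headers
  # destination_type=exchange makes this an exchange-to-exchange binding
  $RABBITMQADMIN -u guest -p guest declare binding \
    source="$src" destination="$dst" destination_type=exchange
}

# Example invocation (uncomment to run against a live broker):
# bind_exchanges test-exchange another-exchange
```

Run it as `bind_exchanges test-exchange another-exchange` once the broker is reachable; with `RABBITMQADMIN=echo` you can preview the generated commands without a broker.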
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

Parse string in sh file

For a GitLab CI/CD project, I need to find the URL of a Knative service (used to deploy a webservice) so that I can use it as the base URL for load testing.
I have found that I can get the URL (and other information) with the command kubectl get ksvc helloworld-go, which outputs:
NAME            URL                                                LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.34.83.80.117.xip.io   helloworld-go-96dtk   helloworld-go-96dtk   True
Can someone please show me an easy way to extract only the URL in an sh script? I believe the easiest way might be to grab the text between the first and second space on the second line.
kubectl get ksvc helloworld-go | grep -oP "http://[^\t]*"
or
kubectl get ksvc helloworld-go | grep -Eo "http://[^[:space:]]*"
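Since the columns are whitespace-separated, awk is another option. A small sketch using the sample output from the question (the variable names are mine); in real use you would pipe kubectl directly into the awk command:

```shell
#!/bin/bash
# Sample output as shown in the question (the second line holds the data row)
sample='NAME            URL                                                LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.34.83.80.117.xip.io   helloworld-go-96dtk   helloworld-go-96dtk   True'

# Take the second whitespace-separated field of the second line
url=$(printf '%s\n' "$sample" | awk 'NR==2 {print $2}')
echo "$url"   # http://helloworld-go.default.34.83.80.117.xip.io
```

Against the live cluster this becomes `kubectl get ksvc helloworld-go | awk 'NR==2 {print $2}'`; if the ksvc resource exposes the URL in its status (it should, as status.url), `kubectl get ksvc helloworld-go -o jsonpath='{.status.url}'` avoids text parsing entirely.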

Disable scheduling on second instance of same project on AWS

I have 2 instances of the same deployment/project on AWS Elastic Beanstalk.
Both contain a Laravel project with scheduling code that runs various commands, found in the schedule method of the Kernel.php class within app/Console. The problem is that when a command runs on one instance, it also runs on the second instance, which is not what I want.
What I would like to happen is that the commands get run from only one instance and not the other. How do I achieve this in the easiest way possible?
Is there a Laravel package which could help me achieve this?
From Laravel 5.6:
Laravel provides an onOneServer method which you can use if your application instances share a single cache server. You could use something like ElastiCache to host Redis or Memcached as the cache server for both application instances. Then you would be able to use the onOneServer method like this:
$schedule->command('report:generate')
         ->fridays()
         ->at('17:00')
         ->onOneServer();
For older versions of Laravel:
You could use the jdavidbakr/multi-server-event package. Once you have it set up you should be able to use it like:
$schedule->command('inspire')
         ->daily()
         ->withoutOverlappingMultiServer();
I had the same issue running some cron jobs (nothing related to Laravel) and found a nice solution (I don't remember where).
What I do is check whether the instance running the code is the first instance in the Auto Scaling Group; if it is, I execute the command, otherwise I just exit.
This is how it's implemented:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id 2>/dev/null)
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document 2>/dev/null | jq -r .region)
# Find the Auto Scaling Group name from the Elastic Beanstalk environment
ASG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" \
  --region "$REGION" --output json | jq -r '.[][] | select(.Key=="aws:autoscaling:groupName") | .Value')
# Find the first in-service instance in the Auto Scaling Group
FIRST=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names "$ASG" \
  --region "$REGION" --output json | \
  jq -r '.AutoScalingGroups[].Instances[] | select(.LifecycleState=="InService") | .InstanceId' | sort | head -1)
# Exit 0 only if this instance is the first one
[ "$FIRST" = "$INSTANCE_ID" ]
Try implementing those calls using PHP and it should work.
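The script above boils down to an exit status, so cron can chain it with &&. A minimal sketch of that guard pattern; first_instance_check and run_scheduled_job are hypothetical stand-ins for the real script and the scheduler invocation:

```shell
#!/bin/bash
# Stand-in for the ASG check above: succeeds only when this instance's
# id matches the "first" in-service instance id.
first_instance_check() {
  [ "$1" = "$2" ]
}

# Stand-in for e.g. "php artisan schedule:run"
run_scheduled_job() {
  echo "job ran"
}

# A crontab entry would chain them: /path/to/check.sh && php artisan schedule:run
first_instance_check i-0abc i-0abc && run_scheduled_job          # runs
first_instance_check i-0abc i-0def && run_scheduled_job || true  # skipped
```

Because the check communicates through its exit status, the same wrapper works for any scheduled command, not just Laravel's.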

Passing a parameter's value to shell function prints only the name of the parameter

I need to pass parameters to my shell function, which looks like this:
function deploy {
  docker create \
    --name="${1}_temp" \
    -e test_postgres_database="$2" \
    -e test_publicAddress="http://${3}:9696"
    # other irrelevant stuff
}
I am passing the following parameters:
deploy test_container test_database ip_address  # $1, $2, $3
So when I pass those 3 parameters, a new container is created based on them. However, the third parameter is special: it comes from another function, which gets the IP address of the container.
function get_container_ip_address {
  container_id=($(docker ps --format "{{.ID}} {{.Names}}" | grep "$1"))
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "${container_id[0]}"
}
So, the execution of the deploy function actually looks like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database ip_address
Let's say the IP address of the container is 1.1.1.1, so the ip_address=1.1.1.1.
However, when I execute the script and create the container, its IP address is:
"http://ip_address:9696" and not "http://1.1.1.1:9696".
I also tried the following:
...
-e test_publicAddress="http://$3:9696"\
...
But I still got the same result. Is there a way to get the value of the passed parameter? By the way, I am sure it contains the needed IP address, as I use it elsewhere (outside a function) and printed it for testing. Thank you in advance!
Run it like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database $ip_address
When you call it without the $, the shell does not expand the variable; it passes the literal string ip_address.
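The distinction can be reproduced in isolation; make_url here is a hypothetical stand-in for the deploy function:

```shell
#!/bin/bash
# Without $, bash passes the variable's *name* as a literal string;
# with $, it expands to the variable's *value*.
make_url() {
  echo "http://$1:9696"
}

ip_address=1.1.1.1

make_url ip_address      # prints http://ip_address:9696  (the bug)
make_url "$ip_address"   # prints http://1.1.1.1:9696     (the fix)
```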

snmpd.conf clientaddr not working for sending trap /inform with given IP source address

Given the following sample/simple snmpd.conf (Net-SNMP 5.7.2 on RHEL 7.4):
rwcommunity private 192.168.56.101
trapsess -Ci --clientaddr=192.168.56.128 -v 2c -c private 192.168.56.101:162
when starting the SNMP daemon with
snmpd -f -Lo -D -C -c data/snmpd_test.conf udp:192.168.56.128:161
we obtain the ''Start Up'' InformRequest with IP source 192.168.56.1 instead of ...128 (observed in a Wireshark capture).
This is not surprising, as the -D option outputs debug information saying that:
trace: netsnmp_config_process_memory_list(): read_config.c, 696:
read_config:mem: processing memory: clientaddr 192.168.56.128
trace: run_config_handler(): read_config.c, 562:
9:read_config:parser: clientaddr handler not registered for this time
Web sources however say:
snmp.conf
...This value is also used by snmpd when generating notifications.
snmpd.conf
trapsess [SNMPCMD_ARGS] HOST
provides a more generic mechanism for defining notification destinations.
SNMPCMD_ARGS should be the command-line options required for an equivalent
snmptrap (or snmpinform) command to send the desired notification
I also read some old threads like this one.
However, this option works well with snmptrap:
snmptrap -D -Lo -Ci --clientaddr=192.168.56.128 -M+path_to_my_mibs -v 2c -c private 192.168.56.101:162 "" .1.3.6.1.4.1.a.b.c.d.e.f.0 i 0
This option also works when placed in snmp.conf (mind: there is no 'd' there), where it applies to snmpset and snmpget (and maybe others).
So my question is: is this a documentation error, a bug, or a misuse of the Net-SNMP stack?
After a long struggle I may have an answer; I am writing a short note since I just found a trick.
It seems that clientaddr is not parsed correctly anywhere in snmpd.conf
(I also tried it inside the trapsess line).
But it is a valid option on the snmpd command line,
just as it is on the snmptrap command line, so I assume both use the same parsing mechanism.
One condition: the IP address must be a valid one.
Which means that
snmpd -f -Lo -D -C -c data/snmpd_test.conf --clientaddr=192.168.56.128 udp:192.168.56.128:161
seems to fully solve my problem.
I will perform more tests and, if this holds up, format this answer a little better, but it seems a good hint.

Convert SNMP traps from v1 to v3

I'm trying to convert SNMP v1 traps to v3. I've followed this discussion but it's vague.
I've also looked here, but without success.
To be clearer: I have a CentOS 6 machine with Net-SNMP 5.5 on it. I need to generate v1 traps, receive them, convert them to v3, then forward them.
Regarding the first guide, this is what I managed so far:
Master:
snmpd -Lo --master=agentx --agentXSocket=tcp:192.168.58.64:42000 udp:1161
Listen:
snmpwalk -v3 -u snmpv3user -A snmpv3pass -a MD5 -l authNoPriv 192.168.58.64:1161
Later edit:
I have made some progress: I was able to run snmpd as master, connect snmptrapd to it as a subagent, and get the v1 trap mechanism working.
I did the following.
In order to get snmptrapd connected as a subagent to snmpd, you need to do the following:
###1 EDIT /etc/hosts.allow and add:
snmpd: $(your_ip)
snmptrapd: $(your_ip)
This is important because snmptrapd fails silently if rejected by the TCP wrappers.
###2 EDIT /etc/snmp/snmpd.conf and add, at the bottom of the other com2sec directives:
com2sec infwnet $(your_ip) YOUR-COMMUNITY
Under "# Second, map the security names into group names:" add these lines:
group MyROGroup v1 infwnet
group MyROGroup v2c infwnet
group MyROGroup usm infwnet
Add this view at the bottom of the other views:
view all included .1 80
Add this group access directive at the bottom of the other access directives:
access MyROGroup "" any noauth exact all none none
Also add this line:
master agentx
###3 TEST it with this:
snmpwalk -v1 -c YOUR_COMMUNITY $(your_ip) .
###4 CREATE THE FOLLOWING TRAP TEST EXAMPLE:
touch /usr/share/snmp/mibs/UCD-TRAP-TEST-MIB.txt
###5 COPY PASTE THE TEXT BELOW INTO IT:
UCD-TRAP-TEST-MIB DEFINITIONS ::= BEGIN
IMPORTS ucdExperimental FROM UCD-SNMP-MIB;
demotraps OBJECT IDENTIFIER ::= { ucdExperimental 990 }
demoTrap TRAP-TYPE
ENTERPRISE demotraps
VARIABLES { sysLocation }
DESCRIPTION "An example of an SMIv1 trap"
::= 17
END
###6 EDIT /etc/sysconfig/snmptrapd (not /etc/default/snmptrapd !!)
replace OPTIONS with this:
OPTIONS="-Lsd -m ALL -M /usr/share/snmp/mibs -p /var/run/snmptrapd.pid"
###7 TEST IT WITH
snmptrap -v 1 -c public $(your_ip) UCD-TRAP-TEST-MIB::demotraps "" 6 17 "" SNMPv2-MIB::sysLocation.0 s "Just here"
Now I just need to find a way to convert them to v3 and read/receive them from a remote snmpd
