I have two VMs in an OpenStack cloud. Using the following commands, I can send data between them:
# On the server (IP 10.0.0.7)
nc -u -l -p 7865
# On the client (10.0.0.10)
nc -u 10.0.0.7 7865
Now, I would like to block the communication from 10.0.0.10 to 10.0.0.7 (but still allow it in the other direction). So I create this flow:
root@ubuntu:/opt/stack/opendaylight# cat my_custom_flow.xml
<?xml version="1.0"?>
<flow xmlns="urn:opendaylight:flow:inventory">
<priority>1</priority>
<flow-name>nakrule-custom-flow</flow-name>
<idle-timeout>12000</idle-timeout>
<match>
<ethernet-match>
<ethernet-type>
<type>2048</type>
</ethernet-type>
</ethernet-match>
<ipv4-source>10.0.0.10/32</ipv4-source>
<ipv4-destination>10.0.0.7/32</ipv4-destination>
<ip-match>
<ip-dscp>28</ip-dscp>
</ip-match>
</match>
<id>10</id>
<table_id>0</table_id>
<instructions>
<instruction>
<order>6555</order>
</instruction>
<instruction>
<order>0</order>
<apply-actions>
<action>
<order>0</order>
<drop-action/>
</action>
</apply-actions>
</instruction>
</instructions>
</flow>
Then I send the flow to my switch. I use OpenDaylight as my SDN controller to manage my OpenStack cloud. I have two switches, br-int and br-ex. A port for each VM in OpenStack is created on br-int. I can get the switch IDs with the following command:
curl -u admin:admin http://192.168.100.100:8181/restconf/config/opendaylight-inventory:nodes | python -m json.tool | grep '"id": "openflow:'[0-9]*'"'
"id": "openflow:2025202531975591"
"id": "openflow:202520253197559"
The switch with the ID 202520253197559 has a lot of flows in its table, while the other has only 2-3. So I guess 202520253197559 is br-int, and therefore I add my new flow to it with the following command:
curl -u admin:admin -H 'Content-Type: application/yang.data+xml' -X PUT -d @my_custom_flow.xml http://192.168.100.100:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:202520253197559/table/234/flow/10
Now, I can see my flow with another REST request:
curl -u admin:admin http://192.168.100.100:8181/restconf/config/opendaylight-inventory:nodes | python -m json.tool
{
"flow-name": "nakrule-custom-flow",
"id": "10",
"idle-timeout": 12000,
"instructions": {
"instruction": [
{
"order": 6555
},
{
"apply-actions": {
"action": [
{
"drop-action": {},
"order": 0
}
]
},
"order": 0
}
]
},
"match": {
"ethernet-match": {
"ethernet-type": {
"type": 2048
}
},
"ip-match": {
"ip-dscp": 28
},
"ipv4-destination": "10.0.0.7/32",
"ipv4-source": "10.0.0.10/32"
},
"priority": 1,
"table_id": 0
},
However, when I go back to my two VMs, they can still send data to each other successfully. Moreover, the following command returns nothing:
ovs-ofctl dump-flows br-int --protocols=OpenFlow13 | grep nakrule
I should see my new flow there; does that mean OpenDaylight did not add it to my switch?
root@ubuntu:/opt/stack# ovs-ofctl snoop br-int
2018-05-11T09:15:27Z|00001|vconn|ERR|unix:/var/run/openvswitch/br-int.snoop: received OpenFlow version 0x04 != expected 01
2018-05-11T09:15:27Z|00002|vconn|ERR|unix:/var/run/openvswitch/br-int.snoop: received OpenFlow version 0x04 != expected 01
Thank you in advance.
Are you sure openflow:1 is the node id of the switch (br-int) that you want to program? I doubt that. Usually openflow:1 is something we see in a Mininet deployment.
Do a GET on the topology API via RESTCONF and figure out the node id of your switch(es). Or you can probably guess it by finding the MAC address of the br-int you are using and converting the hex to decimal. For example, Mininet actually makes its MAC addresses simple, like 00:00:00:00:00:01, so that's why it ends up as openflow:1.
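For example (a sketch reusing the controller address and credentials from the question), you can list the nodes ODL knows about via the operational topology:
curl -u admin:admin http://192.168.100.100:8181/restconf/operational/network-topology:network-topology | python -m json.tool
You can also cross-check which bridge an openflow:<N> id belongs to by reading the bridge's datapath id from OVS and converting the hex to decimal:
# on the OVS host
ovs-vsctl get Bridge br-int datapath_id
printf '%d\n' 0xb830ebc06cf7    # prints 202520253197559, the id guessed to be br-int above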
Another problem I notice in your updated question is that you are sending the flow to table 234 in the URL, but specifying table 0 in the flow data.
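For example (a sketch based on the PUT from the question), keeping both the URL and the flow data on table 0 would look like:
curl -u admin:admin -H 'Content-Type: application/yang.data+xml' -X PUT -d @my_custom_flow.xml http://192.168.100.100:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:202520253197559/table/0/flow/10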
Also, you can check the config/ store in RESTCONF for those nodes to see if ODL is even accepting the flow. If it's in the config store and that switch is connected to the OpenFlow plugin, then the flow should be pushed down to the switch.
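For example (reusing the node id from the question; a sketch, not verified against your setup), a targeted GET on both datastores for that exact flow:
# what ODL has accepted (intended state)
curl -u admin:admin http://192.168.100.100:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:202520253197559/table/0/flow/10 | python -m json.tool
# what the switch has actually reported back (operational state)
curl -u admin:admin http://192.168.100.100:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:202520253197559/table/0/flow/10 | python -m json.tool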
Another place to look for clues is the karaf.log.
Finally, if you think everything is right and the flow should be getting sent down to the switch, but the switch is not showing the flow, then try doing a packet capture. It's possible that your switch is rejecting the flow for some reason. If so, that might also show up in the OVS logs. I doubt this is the problem, but I'm adding it just in case.
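One way to do that capture (assuming the controller connection uses the default OpenFlow TCP port 6653; older deployments use 6633) is a plain tcpdump on the OVS host:
tcpdump -i any -w openflow.pcap 'tcp port 6653 or tcp port 6633'
The resulting pcap can be opened in Wireshark, which dissects OpenFlow 1.3 messages, including any error reply the switch sends for a rejected flow mod.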
Related
I'm trying to use the Cloudflare API to dynamically update a single specific firewall rule.
I'm using a bash script to:
Grab the latest IP addresses within my Cloudflare firewall rule.
Pass in a new IP address to be added to the firewall using the $1 variable.
Use the Filters API to update the firewall rule with the new IP address.
Here's the full bash script I'm using to try and achieve this. (I might not be doing things in the most efficient way, but I'm new to bash overall)
#!/bin/bash
# Cloudflare Email
EMAIL='someone@example.com'
# API Key
TOKEN='Token'
# Zone ID
ZONE='Zone'
# Firewall ID
ID='ID'
# Rule Filter
FILTER='Filter'
# Grab Cloudflare firewall rule we want to update:
RULE=$(
curl -X GET "https://api.cloudflare.com/client/v4/zones/$ZONE/firewall/rules/$ID?id=$ID" \
-H "X-Auth-Email: $EMAIL" \
-H "X-Auth-Key: $TOKEN" \
-H "Content-Type: application/json"
)
# Filter the response to just show IPv4 and IPv6 addresses:
OLD=$(
jq -r '.result.filter.expression | capture(".*{(?<ips>[^}]*)}").ips' <<<"$RULE"
)
# Debug
echo $OLD
# Use the filters API to update the expression
curl -X PUT \
-H "X-Auth-Email: $EMAIL" \
-H "X-Auth-Key: $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"id": "ID",
"paused": false,
"expression": "(ip.src in {'$OLD' '$1'})"
}' "https://api.cloudflare.com/client/v4/zones/$ZONE/filters/$FILTER"
Running this script when there are two IP addresses in the firewall rule works perfectly.
The response is also all good:
{
"result": {
"id": "ID",
"paused": false,
"expression": "(ip.src in {192.168.1.1 192.168.1.2})"
},
"success": true,
"errors": [],
"messages": []
}
But when I run the script a third time, with a different IP address, I get this curl error:
$ bash test.sh 192.168.1.3
192.168.1.1 192.168.1.2 <--- Just Debug
curl: (3) unmatched close brace/bracket in URL position 24:
192.168.1.2 192.168.1.3})"
}
^
I don't understand why it works for two IPs but not for three. Can anyone shed some light on this?
Thank you so much, let me know if anyone needs additional information!
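For what it's worth, curl's "unmatched close brace/bracket in URL" error is what you get when the -d payload is split into several arguments: $OLD and $1 sit outside any double quotes in the single-quoted JSON string, so once $OLD contains two space-separated addresses the shell word-splits the body and curl treats the trailing fragment as an extra URL. A minimal sketch of one way to keep the body as a single, properly quoted argument (assuming jq, which the script already uses, and mirroring the variable names from the script above):
# Build the expression first, then let jq emit the whole JSON body in one piece
NEW_EXPR="(ip.src in {$OLD $1})"
BODY=$(jq -n --arg id "$FILTER" --arg expr "$NEW_EXPR" \
  '{id: $id, paused: false, expression: $expr}')
curl -X PUT \
  -H "X-Auth-Email: $EMAIL" \
  -H "X-Auth-Key: $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$BODY" \
  "https://api.cloudflare.com/client/v4/zones/$ZONE/filters/$FILTER"
Whether the body's id field should be the filter id ($FILTER) is an assumption here; adjust it to whatever your original request expects.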
I want to send a JSON request and embed a variable in the POST data.
I did a little research and came up with putting single quotes around the variable.
#!/bin/bash
FILENAME="/media/file.avi"
curl -i -X POST -H "Content-Type: application/json" —d '{"jsonrpc": "2.0", "method": "Player.Open", "params":{"item":{"file":"'$FILENAME'"}}}' http://192.167.0.13/jsonrpc
Unfortunately I get some errors:
curl: (6) Couldn't resolve host '—d'
curl: (3) [globbing] nested braces not supported at pos 54
HTTP/1.1 200 OK
Content-Length: 76
Content-Type: application/json
Date: Wed, 29 Jan 2014 19:16:56 GMT
{"error":{"code":-32700,"message":"Parse error."},"id":null,"jsonrpc":"2.0"}
Apparently there are some problems with the braces, and the HTTP response states that the command could not be executed. What's wrong with my code here?
Thanks!
This is my curl version:
curl 7.30.0 (mips-unknown-linux-gnu) libcurl/7.30.0 OpenSSL/0.9.8y
Protocols: file ftp ftps http https imap imaps pop3 pop3s rtsp smtp smtps tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL
Update: use the simpler
request_body=$(cat <<EOF
{
"jsonrpc": "2.0",
"method": "Player.Open",
"params": {
"item": {
"file": "$FILENAME"
}
}
}
EOF
)
rather than what I explain below. However, if it is an option, use jq to generate the JSON instead. This ensures that the value of $FILENAME is properly quoted.
request_body=$(jq -n --arg fname "$FILENAME" '
{
jsonrpc: "2.0",
method: "Player.Open",
params: {item: {file: $fname}}
}')
It would be simpler to define a variable with the contents of the request body first:
#!/bin/bash
header="Content-Type: application/json"
FILENAME="/media/file.avi"
request_body=$(< <(cat <<EOF
{
"jsonrpc": "2.0",
"method": "Player.Open",
"params": {
"item": {
"file": "$FILENAME"
}
}
}
EOF
))
curl -i -X POST -H "$header" -d "$request_body" http://192.167.0.13/jsonrpc
This definition might require an explanation to understand, but note two big benefits:
You eliminate a level of quoting
You can easily format the text for readability.
First, you have a simple command substitution that reads from a file:
$( < ... ) # bash improvement over $( cat ... )
Instead of a file name, though, you specify a process substitution, in which the output of a command is used as if it were the body of a file.
The command in the process substitution is simply cat, which reads from a here document. It is the here document that contains your request body.
My suggestion:
#!/bin/bash
FILENAME="/media/file 2.avi"
curl -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "method": "Player.Open", "params":{"item":{"file":"'"$FILENAME"'"}}}' http://192.167.0.13/jsonrpc
The differences are a regular hyphen in -d (instead of an em-dash) and double quotes around $FILENAME.
Here is another way to insert data from a file into a JSON property.
This solution is based on a really cool command called jq.
Below is an example which prepares the JSON request data used to create a CoreOS droplet on DigitalOcean:
# Load the cloud config to variable
user_data=$(cat config/cloud-config)
# Prepare the request data
request_data='{
"name": "server name",
"region": "fra1",
"size": "512mb",
"image": "coreos-stable",
"backups": false,
"ipv6": true,
"user_data": "---this content will be replaced---",
"ssh_keys": [1234, 2345]
}'
# Insert data from file into the user_data property
request_data=$(echo "$request_data" | jq ". + {user_data: \"$user_data\"}")
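A note on top of that (my addition, not part of the original snippet): if the file can contain double quotes, backslashes or newlines, interpolating $user_data directly into the jq filter will produce invalid JSON; passing it via --arg lets jq handle the quoting instead:
# jq quotes/escapes the value of $user_data itself
request_data=$(jq --arg ud "$user_data" '. + {user_data: $ud}' <<<"$request_data")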
I recently upgraded my ElasticStack instance from 5.5 to 6.0, and it seems that some of the breaking changes in this version have harmed my pipeline. I had a script that, depending on the indices inside Elasticsearch, created index-patterns automatically for some groups of similar indices. The problem is that with the new mapping changes of the 6.0 version, I cannot add any new index-pattern from the console. This is the request I used, which worked fine in 5.5:
curl -XPOST "http://localhost:9200/.kibana/index-pattern" -H 'Content-Type: application/json' -d'
{
"title" : "index_name",
"timeFieldName" : "execution_time"
}'
This is the response I get now, in 6.0, from ElasticSearch:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [index-pattern, doc]"
}
],
"type": "illegal_argument_exception",
"reason": "Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [index-pattern, doc]"
},
"status": 400
}
How could I add index-patterns from the console avoiding this multiple mapping issue?
The URL has changed in version 6.0.0; here is the new URL:
http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name
This curl command should work for you:
curl -XPOST "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name" -H 'Content-Type: application/json' -d'
{
"type" : "index-pattern",
"index-pattern" : {
"title": "my-index-pattern-name*",
"timeFieldName": "execution_time"
}
}'
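If the request succeeded, one way to double-check (a sketch against the same local Elasticsearch) is to fetch the document back by its id:
curl -XGET "http://localhost:9200/.kibana/doc/index-pattern:my-index-pattern-name?pretty"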
If you are on Kibana 7.0.1 / 7+, then you can use the saved_objects API, for example:
Refer to https://www.elastic.co/guide/en/kibana/master/saved-objects-api.html (look for Get, Create, Delete, etc.).
In this case, we'll use: https://www.elastic.co/guide/en/kibana/master/saved-objects-api-create.html
$ curl -X POST -u $user:$pass -H "Content-Type: application/json" -H "kbn-xsrf:true" "${KIBANA_URL}/api/saved_objects/index-pattern/dummy_index_pattern" -d '{ "attributes": { "title":"index_name*", "timeFieldName":"sprint_start_date"}}' -w "\n" | jq
and the output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 327 100 250 100 77 543 167 --:--:-- --:--:-- --:--:-- 543
{
"type": "index-pattern",
"id": "dummy_index_pattern",
"attributes": {
"title": "index_name*",
"timeFieldName": "sprint_start_date"
},
"references": [],
"migrationVersion": {
"index-pattern": "6.5.0"
},
"updated_at": "2020-02-25T22:56:44.531Z",
"version": "Wzg5NCwxNV0="
}
Where $KIBANA_URL was set to: http://my-elk-stack.devops.local:5601
If you don't have jq installed, remove | jq from the command (as listed above).
PS: When Kibana's GUI is used to create an index-pattern, Kibana stores the index-pattern's ID as an alphanumeric value (e.g. laskl32ukdflsdjflskadf-sdf-sdfsaldkjfhsdf-dsfasdf), which is hard to use/find/type when doing a GET operation to look up an existing index-pattern with the following curl command.
If you pass an index-pattern name (like we did above), then Kibana/Elasticsearch will store the index-pattern's ID under the name you gave to the REST call (e.g. .../api/saved_objects/index-pattern/dummy_index_pattern).
Here, dummy_index_pattern will become the ID (only visible if you hover your mouse over the index-pattern name in the Kibana GUI), and
its index name will be index_name* (i.e. what's listed in the GUI when you click Kibana Home > gear icon > Index Patterns and see the index patterns listed on the right side).
NOTE: The timeFieldName is very important. This is the field that is used for looking up time-series events (especially for the TSVB Time Series Visual Builder visualization type). By default it uses the @timestamp field, but if you recreate your index from scratch every time and send all data in one shot from a data source (e.g. JIRA), instead of sending only delta information to your target Elasticsearch index, then @timestamp won't help with the visualization's time-spanning/window feature (where you change the time range from, say, the last week to the last hour or the last 6 months). In that case you can set a different field, e.g. sprint_start_date as I used; now, on the Kibana Discover page, if you select this index-pattern, it'll use the sprint_start_date (type: date) field for events.
To GET info about the newly created index-pattern, you can refer to https://www.elastic.co/guide/en/kibana/master/saved-objects-api-get.html, or run the following (the last value in the URL path is the ID of the index-pattern we created earlier):
curl -X GET "${KIBANA_URL}/api/saved_objects/index-pattern/dummy_index_pattern" | jq
or, if you want to perform a GET on an index-pattern which was created via Kibana's GUI (under Index Patterns > Create Index Pattern), you'd have to enter something like this:
curl -X GET "${KIBANA_URL}/api/saved_objects/index-pattern/jqlaskl32ukdflsdjflskadf-sdf-sdfsaldkjfhsdf-dsfasdf" | jq
For Kibana 7.7.0 with Open Distro security plugin (amazon/opendistro-for-elasticsearch-kibana:1.8.0 Docker image to be precise), this worked for me:
curl -X POST \
-u USERNAME:PASSWORD \
KIBANA_HOST/api/saved_objects/index-pattern \
-H "kbn-version: 7.7.0" \
-H "kbn-xsrf: true" \
-H "content-type: application/json; charset=utf-8" \
-d '{"attributes":{"title":"INDEX-PATTERN*","timeFieldName":"#timestamp","fields":"[]"}}'
Please note that the kbn-xsrf header is required, but it seems useless from a security point of view.
Output was like:
{"type":"index-pattern","id":"UUID","attributes":{"title":"INDEX-PATTERN*","timeFieldName":"#timestamp","fields":"[]"},"references":[],"migrationVersion":{"index-pattern":"7.6.0"},"updated_at":"TIMESTAMP","version":"VERSION"}
I can't tell why migrationVersion.index-pattern is "7.6.0".
For other Kibana versions you should be able to:
Open the Kibana UI in a browser
Open the developer console and navigate to the Network tab
Create the index pattern using the UI
Open the POST request in the developer console, take a look at the URL and headers, then rewrite it as a curl command
Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type.
Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x.
Mapping types will be completely removed in Elasticsearch 7.0.0.
Maybe you are creating an index with more than one doc_type in ES 6.0.0.
https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html
Create index-patterns in bulk with a timestamp:
cat index_svc.txt
my-index1
my-index2
my-index3
my-index4
my-index5
my-index6
cat index_svc.txt | while read index; do
echo -ne "create index-pattern ${index} \t"
curl -XPOST "http://10.0.1.44:9200/.kibana/doc/index-pattern:${index}" -H 'Content-Type: application/json' -d "{\"type\":\"index-pattern\",\"index-pattern\":{\"title\":\"${index}2020*\",\"timeFieldName\":\"#timestamp\"}}"
echo
done
I have a question related to DRMAA and the cluster config file in Snakemake.
Currently I have a pipeline and I submit jobs to the cluster using DRMAA with the following command:
snakemake --drmaa " -q short.q -pe smp 8 -l membycore=4G" --jobs 100 -p file1/out file2/out file3/out
The problem is that some of the rules/jobs require more or fewer resources. I thought that if I used the JSON cluster file, I would be able to submit the jobs with different resources. My JSON file looks like this:
{
"__default__":
{
"-q":"short.q",
"-pe":"smp 1",
"-l":"membycore=4G"
},
"job1":
{
"-q":"short.q",
"-pe":"smp 8",
"-l":"membycore=4G"
},
"job2":
{
"-q":"short.q",
"-pe":"smp 8",
"-l":"membycore=4G"
}
}
When I run the following command, my jobs (job1 and job2) are submitted with the default options and not with the custom ones:
snakemake --jobs 100 --cluster-config cluster.json --drmaa -p file1/out file2/out file3/out
What am I doing wrong? Is it that I cannot combine the --drmaa option with the cluster-config file?
The cluster config file simply allows you to define variables that are later used in --cluster/--cluster-sync/--drmaa via the corresponding placeholders. There's no DRMAA-specific magic involved here. Have a look at the corresponding section in the documentation again.
Maybe an example makes things clearer:
Cluster config:
{
"__default__":
{
"time" : "02:00:00",
"mem" : 1G,
},
# more rule specific definitions here...
}
Example snakemake arguments to make use of the above:
--drmaa " -pe OpenMP {threads} -l mem_free={cluster.mem} -l h_rt={cluster.time}"
or
--cluster-sync "qsub -sync y -pe OpenMP {threads} -l mem_free={cluster.mem} -l h_rt={cluster.time}"
{cluster.time} and {cluster.mem} will be replaced accordingly per rule.
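Applied to the cluster.json from the question, that could look roughly like this (a sketch; the key names queue, pe and mem are arbitrary, the point is that the --drmaa string references them as {cluster.<key>} and that rule-specific entries only override what differs from __default__):
{
    "__default__": { "queue": "short.q", "pe": "smp 1", "mem": "membycore=4G" },
    "job1":        { "pe": "smp 8" },
    "job2":        { "pe": "smp 8" }
}
and the call becomes:
snakemake --jobs 100 --cluster-config cluster.json \
    --drmaa " -q {cluster.queue} -pe {cluster.pe} -l {cluster.mem}" \
    -p file1/out file2/out file3/out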
Andreas
I have a Heroku Postgres database and I'd like to rotate its credentials via REST. It would be trivial on the command line (source):
heroku pg:credentials DATABASE --reset
How do you do the same thing in pure REST? The closest I could find was to obtain the details of the add-on:
# The ID was obtained from https://api.heroku.com/apps/myexampleapp/addons
curl -n -X GET https://api.heroku.com/apps/myexampleapp/addons/5ebb9a9e-b340-4a62-afb5-de0c99fd0fad \
-H "Accept: application/vnd.heroku+json; version=3"
{
"config_vars":[
"HEROKU_POSTGRESQL_AMBER_URL"
],
"created_at":"2015-02-02T11:09:33Z",
"id":"5ebb9a9e-b340-4a62-afb5-de0c99fd0fad",
"name":"heroku-postgresql-amber",
"addon_service":{
"id":"6c67493d-8fc2-4cd4-9161-4f1ec11cbe69",
"name":"Heroku Postgres"
},
"plan":{
"id":"062a1cc7-f79f-404c-9f91-135f70175577",
"name":"heroku-postgresql:hobby-dev"
},
"app":{
"id":"cccccccc-ffff-4444-bbbb-dddddddddddd",
"name":"myexampleapp"
},
"provider_id":"resource8637857#heroku.com",
"updated_at":"2015-02-02T11:09:33Z"
}
But I haven't succeeded in resetting/rotating the credentials using a POST to https://api.heroku.com/apps/myexampleapp/addons/5ebb9a9e-b340-4a62-afb5-de0c99fd0fad/reset or similar. What is the right command?