GNU Radio ZMQ Blocks REP - REQ - zeromq

I am trying to connect GNU Radio to a Python script using the GR ZMQ REP/REQ blocks. GNU Radio is running on a Raspberry Pi 4 at address 192.168.1.25 on my router. The Python script is on a separate computer, from which I can successfully ping 192.168.1.25. I am setting up the REQ-REP pairs on separate ports, 55555 and 55556.
Python script:
import pmt
import zmq

# create a REQ socket
req_address = 'tcp://192.168.1.25:55555'
req_context = zmq.Context()
req_sock = req_context.socket(zmq.REQ)
rc = req_sock.connect(req_address)

# create a REP socket
rep_address = 'tcp://192.168.1.25:55556'
rep_context = zmq.Context()
rep_sock = rep_context.socket(zmq.REP)
rc = rep_sock.connect(rep_address)

while True:
    data = req_sock.recv()
    print(data)
    rep_sock.send(b'1')
Running this code leads to the following error:
ZMQError: Operation cannot be accomplished in current state
The error is flagged at this line:
data = req_sock.recv()
Can you comment on the cause of the error? I know there is a strict REQ-REP, REQ-REP.. relationship, but I cannot find my error.

Your current code has two problems:
1. You call req_sock.recv(), but then you call rep_sock.send(): that's not how a REQ/REP pair works. You only need to create one socket (the REQ socket); it connects to a remote REP socket.
2. When you create a REQ socket, you need to send a REQuest before you receive a REPly.
Additionally, you should only create a single ZMQ context, even if you have multiple sockets.
A functional version of your code might look like this:
import zmq

# create a REQ socket
ctx = zmq.Context()
req_sock = ctx.socket(zmq.REQ)

# connect to a remote REP sink
rep_address = 'tcp://192.168.1.25:55555'
rc = req_sock.connect(rep_address)

while True:
    req_sock.send(b'1')
    data = req_sock.recv()
    print(data)
I tested the above code against the following GNU Radio config:
options:
  parameters:
    author: ''
    catch_exceptions: 'True'
    category: '[GRC Hier Blocks]'
    cmake_opt: ''
    comment: ''
    copyright: ''
    description: ''
    gen_cmake: 'On'
    gen_linking: dynamic
    generate_options: qt_gui
    hier_block_src_path: '.:'
    id: example
    max_nouts: '0'
    output_language: python
    placement: (0,0)
    qt_qss_theme: ''
    realtime_scheduling: ''
    run: 'True'
    run_command: '{python} -u {filename}'
    run_options: prompt
    sizing_mode: fixed
    thread_safe_setters: ''
    title: Example
    window_size: (1000,1000)
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [8, 8]
    rotation: 0
    state: enabled

blocks:
- name: samp_rate
  id: variable
  parameters:
    comment: ''
    value: '32000'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 12]
    rotation: 0
    state: enabled
- name: analog_sig_source_x_0
  id: analog_sig_source_x
  parameters:
    affinity: ''
    alias: ''
    amp: '1'
    comment: ''
    freq: '1000'
    maxoutbuf: '0'
    minoutbuf: '0'
    offset: '0'
    phase: '0'
    samp_rate: samp_rate
    type: complex
    waveform: analog.GR_COS_WAVE
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 292.0]
    rotation: 0
    state: true
- name: blocks_throttle_0
  id: blocks_throttle
  parameters:
    affinity: ''
    alias: ''
    comment: ''
    ignoretag: 'True'
    maxoutbuf: '0'
    minoutbuf: '0'
    samples_per_second: samp_rate
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [344, 140.0]
    rotation: 0
    state: true
- name: zeromq_rep_sink_0
  id: zeromq_rep_sink
  parameters:
    address: tcp://0.0.0.0:55555
    affinity: ''
    alias: ''
    comment: ''
    hwm: '-1'
    pass_tags: 'False'
    timeout: '100'
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [504, 216.0]
    rotation: 0
    state: true

connections:
- [analog_sig_source_x_0, '0', blocks_throttle_0, '0']
- [blocks_throttle_0, '0', zeromq_rep_sink_0, '0']

metadata:
  file_format: 1
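A note on what the client actually receives: the ZMQ REP sink replies with the raw bytes of the stream it is fed, so with the complex stream above each reply should be a block of interleaved 32-bit float I/Q samples (GNU Radio's complex type, i.e. numpy complex64). A minimal decoding sketch on the client side, assuming the flow graph above and that numpy is installed:

import numpy as np
import zmq

ctx = zmq.Context()
req_sock = ctx.socket(zmq.REQ)
req_sock.connect('tcp://192.168.1.25:55555')

req_sock.send(b'1')    # request, as in the example above
raw = req_sock.recv()  # raw sample bytes from the REP sink
samples = np.frombuffer(raw, dtype=np.complex64)  # GNU Radio "complex" == complex64
print(len(samples), samples[:4])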

Related

Vertex AI (Kubeflow) pipeline doesn't pick-up metrics from file /mlpipeline-metrics.json

I am testing the metrics display in Vertex AI (the Kubeflow UI) for a pipeline running on GCP. I use the reusable component method, with the component specified in YAML, not function-based components. The component is containerized, pushed to Artifact Registry, and referenced through the YAML specification. I verified that the file '/mlpipeline-metrics.json' is correctly created in the container, but the Kubeflow UI doesn't show the metric (accuracy in this case). I am able to export the metrics to the outputPath, but they are also not displayed in the UI from the local JSON above.
I've ensured that the metrics artifact is correctly named ("mlpipeline-metrics") and that the file is saved in the root of the container ("mlpipeline-metrics.json"). Still, the Kubeflow pipeline doesn't display the metrics in the RUN view.
This is the code:
import argparse
import json

def produce_metrics(mlpipeline_metrics):
    accuracy = 0.9
    metrics = {
        'metrics': [{
            'name': 'accuracy-score',
            'numberValue': accuracy,
            'format': "PERCENTAGE",
        }]
    }
    # save to mlpipeline-metrics.json file in the root
    with open('/mlpipeline-metrics.json', 'w') as f:
        json.dump(metrics, f)
    # save to artifact path
    with open(mlpipeline_metrics + '.json', 'w') as f:
        json.dump(metrics, f)

def main_fn(arguments):
    training_table_bq = arguments.training_table
    validation_table_bq = arguments.validation_table
    schema_dict = generate_schema(training_table_bq)
    target_name = arguments.target_name
    gcs_model_path = arguments.gcs_model_path
    mlpipeline_metrics = arguments.mlpipeline_metrics
    # run train evaluate
    gcs_model_path = train_evaluate(training_table_bq, validation_table_bq, schema_dict, target_name, gcs_model_path, mlpipeline_metrics)
    return gcs_model_path

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="train evaluate model")
    parser.add_argument("--training_table", type=str, help='Name of the input training table')
    parser.add_argument("--validation_table", type=str, help='Name of the input validation table')
    parser.add_argument("--target_name", type=str, help='Name of the target variable')
    parser.add_argument("--gcs_model_path", type=str, help='output directory where the model is saved')
    parser.add_argument('--mlpipeline_metrics',
                        type=str,
                        required=False,
                        default='/mlpipeline-metrics.json',
                        help='output path for the file containing metrics JSON structure.')
    args = parser.parse_args()
training.yaml
============
name: training
description: Scikit trainer. Receives the name of train and validation BQ tables. Train, evaluate and save a model.
inputs:
- {name: training_table, type: String, description: 'name of the BQ training table'}
- {name: validation_table, type: String, description: 'name of the BQ validation table'}
- {name: target_name, type: String, description: 'Name of the target variable'}
- {name: max_depth, type: Integer, description: 'max depth'}
- {name: learning_rate, type: Float, description: 'learning rate'}
- {name: n_estimators, type: Integer, description: 'n estimators'}
outputs:
- {name: gcs_model_path, type: OutputPath, description: 'output directory where the model is saved'}
- {name: MLPipeline_Metrics, type: Metrics, description: 'output directory where the metrics are saved'}
implementation:
  container:
    image: ..... train_comp:latest
    command: [
      /src/component/train.py,
      --training_table, {inputValue: training_table},
      --validation_table, {inputValue: validation_table},
      --target_name, {inputValue: target_name},
      --gcs_model_path, {outputPath: gcs_model_path},
      --mlpipeline_metrics_path, {outputPath: MLPipeline_Metrics},
      --max_depth, {inputValue: max_depth},
      --learning_rate, {inputValue: learning_rate},
      --n_estimators, {inputValue: n_estimators}
    ]
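For reference, a reusable component spec like this is typically loaded into a pipeline with kfp.components.load_component_from_file and wired into a @dsl.pipeline; outputPath arguments are filled in by the pipeline backend at runtime. A minimal sketch (the pipeline name and argument values are placeholders, not part of the original question):

from kfp import dsl
from kfp.components import load_component_from_file

# Load the reusable component from the YAML spec above.
train_op = load_component_from_file('training.yaml')

@dsl.pipeline(name='training-pipeline')
def pipeline(training_table: str, validation_table: str, target_name: str):
    # Only the declared inputs are passed explicitly; the output artifacts
    # (gcs_model_path, MLPipeline_Metrics) get their paths injected by the system.
    train_task = train_op(
        training_table=training_table,
        validation_table=validation_table,
        target_name=target_name,
        max_depth=6,
        learning_rate=0.1,
        n_estimators=100,
    )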

how do you start a workflow from another workflow and retrieve the return value of called workflow

I am testing Google Workflows and would like to call a workflow from another workflow, but as a separate process (not a subworkflow).
I am able to start the execution, but I am currently unable to retrieve the return value. Instead, I receive an instance of the execution:
{
  "argument": "null",
  "name": "projects/xxxxxxxxxxxx/locations/us-central1/workflows/child-workflow/executions/9fb4aa01-2585-42e7-a79f-cfb4b57b22d4",
  "startTime": "2020-12-09T01:38:07.073406981Z",
  "state": "ACTIVE",
  "workflowRevisionId": "000003-cf3"
}
parent-workflow.yaml
main:
  params: [args]
  steps:
    - callChild:
        call: http.post
        args:
          url: 'https://workflowexecutions.googleapis.com/v1beta/projects/my-project/locations/us-central1/workflows/child-workflow/executions'
          auth:
            type: OAuth2
            scope: 'https://www.googleapis.com/auth/cloud-platform'
        result: callresult
    - returnValue:
        return: ${callresult.body}
child-workflow.yaml:
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: CurrentDateTime
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${CurrentDateTime.body.dayOfTheWeek}
    result: WikiResult
- returnOutput:
    return: ${WikiResult.body[1]}
Also, as an added question: how can I create a dynamic URL from a variable? ${} doesn't seem to work there.

Because executions are asynchronous API calls, you need to poll the execution to see when it has finished.
You can have the following algorithm:
main:
  steps:
    - callChild:
        call: http.post
        args:
          url: ${"https://workflowexecutions.googleapis.com/v1beta/projects/"+sys.get_env("GOOGLE_CLOUD_PROJECT_ID")+"/locations/us-central1/workflows/http_bitly_secrets/executions"}
          auth:
            type: OAuth2
            scope: 'https://www.googleapis.com/auth/cloud-platform'
        result: workflow
    - waitExecution:
        call: CloudWorkflowsWaitExecution
        args:
          execution: ${workflow.body.name}
        result: workflow
    - returnValue:
        return: ${workflow}

CloudWorkflowsWaitExecution:
  params: [execution]
  steps:
    - init:
        assign:
          - i: 0
          - valid_states: ["ACTIVE","STATE_UNSPECIFIED"]
          - result:
              state: ACTIVE
    - check_condition:
        switch:
          - condition: ${result.state in valid_states AND i<100}
            next: iterate
        next: exit_loop
    - iterate:
        steps:
          - sleep:
              call: sys.sleep
              args:
                seconds: 10
          - process_item:
              call: http.get
              args:
                url: ${"https://workflowexecutions.googleapis.com/v1beta/"+execution}
                auth:
                  type: OAuth2
              result: result
          - assign_loop:
              assign:
                - i: ${i+1}
                - result: ${result.body}
              next: check_condition
    - exit_loop:
        return: ${result}
What you see here is that the CloudWorkflowsWaitExecution subworkflow loops at most 100 times with a 10-second delay between polls; it stops when the child workflow has finished and returns the result.
The output is:
argument: 'null'
endTime: '2020-12-09T13:00:11.099830035Z'
name: projects/985596417983/locations/us-central1/workflows/call_another_workflow/executions/05eeefb5-60bb-4b20-84bd-29f6338fa66b
result: '{"argument":"null","endTime":"2020-12-09T13:00:00.976951808Z","name":"projects/985596417983/locations/us-central1/workflows/http_bitly_secrets/executions/2f4b749c-4283-4c6b-b5c6-e04bbcd57230","result":"{\"archived\":false,\"created_at\":\"2020-10-17T11:12:31+0000\",\"custom_bitlinks\":[],\"deeplinks\":[],\"id\":\"j.mp/2SZaSQK\",\"link\":\"//<edited>/2SZaSQK\",\"long_url\":\"https://cloud.google.com/blog\",\"references\":{\"group\":\"https://api-ssl.bitly.com/v4/groups/Bg7eeADYBa9\"},\"tags\":[]}","startTime":"2020-12-09T13:00:00.577579042Z","state":"SUCCEEDED","workflowRevisionId":"000001-478"}'
startTime: '2020-12-09T13:00:00.353800247Z'
state: SUCCEEDED
workflowRevisionId: 000012-cb8
In the result field there is a sub-key that holds the result returned by the external workflow execution.
The best method is now the workflows.executions.run helper method, which formats the request and blocks until the workflow execution has completed:
- run_execution:
    try:
      call: googleapis.workflowexecutions.v1.projects.locations.workflows.executions.run
      args:
        workflow_id: ${workflow}
        location: ${location}    # Defaults to current location
        project_id: ${project}   # Defaults to current project
        argument: ${arguments}   # Arguments could be specified inline as a map instead.
      result: r1
    except:
      as: e
      steps: ...                 # handle a failed execution
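If the caller is ordinary application code rather than another workflow, the same start-then-poll pattern can be written against the Executions API directly. A rough sketch with the Python client library (assuming the google-cloud-workflows package is installed and the caller is authorized; the project, location and workflow names are placeholders):

import time

from google.cloud.workflows import executions_v1

client = executions_v1.ExecutionsClient()
parent = "projects/my-project/locations/us-central1/workflows/child-workflow"

# Start the child workflow execution.
execution = client.create_execution(parent=parent)

# Poll until the execution leaves the ACTIVE state.
while execution.state == executions_v1.Execution.State.ACTIVE:
    time.sleep(10)
    execution = client.get_execution(name=execution.name)

# For a SUCCEEDED execution, the workflow's return value is the JSON string in `result`.
print(execution.state, execution.result)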

Eclipse Milo: handle missing Server Nonce in ActivateSessionResponse

I use Eclipse Milo (0.2.3) in my project for OPC UA communication. The OPC UA participants are a client (written using Eclipse Milo) and a server, which runs on a remote machine and is not implemented using Milo.
I can connect the client to the server normally, and if the remote server is shut down, the client reconnects automatically as soon as the server is accessible again.
However, after updating the server software, the client can no longer reconnect, and it floods the server with the following message sequence:
1. Create Session Request
2. The server is able to create a session
3. Activate Session Request
4. The server sends an Activate Session Response in which the ServerNonce is missing and the service result is "Bad"
This causes the client to send a new Create Session Request. This all happens multiple times within a second, which makes it impossible for the server to do anything other than try to create this session.
Are there any settings in Milo to specify the reconnection delay? Or is there any setting that specifies what should happen when an empty ServerNonce is received?
The server's responses are as follows:
If the session can be activated:
OpcUa Binary Protocol
    Message Type: MSG
    Chunk Type: F
    Message Size: 96
    SecureChannelId: 1599759116
    Security Token Id: 1
    Security Sequence Number: 53
    Security RequestId: 3
    OpcUa Service : Encodeable Object
        TypeId : ExpandedNodeId
            NodeId EncodingMask: Four byte encoded Numeric (0x01)
            NodeId Namespace Index: 0
            NodeId Identifier Numeric: ActivateSessionResponse (470)
        ActivateSessionResponse
            ResponseHeader: ResponseHeader
                Timestamp: Nov 16, 2018 14:05:47.974000000
                RequestHandle: 1
                ServiceResult: 0x00000000 [Good]
                ServiceDiagnostics: DiagnosticInfo
                    EncodingMask: 0x00
                        .... ...0 = has symbolic id: False
                        .... ..0. = has namespace: False
                        .... .0.. = has localizedtext: False
                        .... 0... = has locale: False
                        ...0 .... = has additional info: False
                        ..0. .... = has inner statuscode: False
                        .0.. .... = has inner diagnostic info: False
                StringTable: Array of String
                    ArraySize: 0
                AdditionalHeader: ExtensionObject
                    TypeId: ExpandedNodeId
                    EncodingMask: 0x00
            ServerNonce: ab...
            Results: Array of StatusCode
                ArraySize: 0
            DiagnosticInfos: Array of DiagnosticInfo
                ArraySize: 0
If the session can't be activated (after updating the server's software):
OpcUa Binary Protocol
    Message Type: MSG
    Chunk Type: F
    Message Size: 64
    SecureChannelId: 1599759041
    Security Token Id: 1
    Security Sequence Number: 61
    Security RequestId: 11
    OpcUa Service : Encodeable Object
        TypeId : ExpandedNodeId
        ActivateSessionResponse
            ResponseHeader: ResponseHeader
                Timestamp: Nov 16, 2018 12:49:08.235000000
                RequestHandle: 222
                ServiceResult: 0x80000000 [Bad]
                ServiceDiagnostics: DiagnosticInfo
                    EncodingMask: 0x00
                        .... ...0 = has symbolic id: False
                        .... ..0. = has namespace: False
                        .... .0.. = has localizedtext: False
                        .... 0... = has locale: False
                        ...0 .... = has additional info: False
                        ..0. .... = has inner statuscode: False
                        .0.. .... = has inner diagnostic info: False
                StringTable: Array of String
                    ArraySize: 0
                AdditionalHeader: ExtensionObject
                    TypeId: ExpandedNodeId
                    EncodingMask: 0x00
            ServerNonce: <MISSING>[OpcUa Null ByteString]
            Results: Array of StatusCode
                ArraySize: 0
            DiagnosticInfos: Array of DiagnosticInfo
                ArraySize: 0
Thank you in advance for your help.
The corner case you described, where there's no delay between a failed re-activation and the subsequent re-creation, is addressed on the dev/0.3 branch in this commit.
I might be able to back port it to 0.2.x next week if I have some spare time.
I don't think there are any workarounds you can use.

Unable to dump as "pure" YAML

ruamel.yaml==0.15.37
Python 3.6.2 :: Continuum Analytics, Inc.
Current code:
from ruamel.yaml import YAML
import sys
yaml = YAML()
kube_context = yaml.load('''
apiVersion: v1
clusters: []
contexts: []
current-context: ''
kind: Config
preferences: {}
users: []
''')
kube_context['users'].append({'name': '{username}/{cluster}'.format(username='test', cluster='test'), 'user': {'token': 'test'}})
kube_context['clusters'].append({'name': 'test', 'cluster': {'server': 'URL:443'}})
kube_context['contexts'].append({'name': 'test', 'context': {'user': 'test', 'cluster': 'test'}})
yaml.dump(kube_context, sys.stdout)
My yaml.dump() is producing output in which the appended list and dict objects stay in flow style instead of being fully expanded.
Current output:
apiVersion: v1
clusters: [{name: test, cluster: {server: URL:443}}]
contexts: [{name: test, context: {user: test, cluster: test}}]
current-context: ''
kind: Config
preferences: {}
users: [{name: test/test, user: {token: test}}]
What do I need to do in order to have yaml.dump() output fully expanded?
Expected output:
apiVersion: v1
clusters:
- name: test
  cluster:
    server: URL:443
contexts:
- name: test
  context:
    user: test
    cluster: test
current-context: ''
kind: Config
preferences: {}
users:
- name: test/test
  user:
    token: test
ruamel.yaml, when using the default YAML() or YAML(typ='rt') will preserve the flow- or block style of sequences and mappings. There is no way to make a block style empty sequence or empty mapping and your [] and {} are therefore tagged as flow style when loaded.
Flow style can only contain flow style (whereas block style can contain block style or flow style) (YAML 1.2 spec 8.2.3):
YAML allows flow nodes to be embedded inside block collections (but not vice-versa).
Because of that, the dict/mapping data that you insert in the (flow-style) list/sequence will also be represented as flow-style.
If you want everything to be block style (what you call "expanded" mode), you can explicitly set that by calling the .set_block_style() method on the .fa attribute (which is only available on the collections, hence the try/except):
from ruamel.yaml import YAML
import sys

yaml = YAML()
kube_context = yaml.load('''
apiVersion: v1
clusters: []
contexts: []
current-context: ''
kind: Config
preferences: {}
users: []
''')
kube_context['users'].append({'name': '{username}/{cluster}'.format(username='test', cluster='test'), 'user': {'token': 'test'}})
kube_context['clusters'].append({'name': 'test', 'cluster': {'server': 'URL:443'}})
kube_context['contexts'].append({'name': 'test', 'context': {'user': 'test', 'cluster': 'test'}})
for k in kube_context:
    try:
        kube_context[k].fa.set_block_style()
    except AttributeError:
        pass
yaml.dump(kube_context, sys.stdout)
this gives:
apiVersion: v1
clusters:
- name: test
  cluster:
    server: URL:443
contexts:
- name: test
  context:
    user: test
    cluster: test
current-context: ''
kind: Config
preferences: {}
users:
- name: test/test
  user:
    token: test
Please note that it is not necessary to set yaml.default_flow_style = False in the default round-trip-mode; and that although block-style has been set for the value of key preferences, it is represented flow style as there is no other way to represent an empty mapping.
The output is "pure" YAML. You want the nodes to be presented in block style (indentation-based) as opposed to the current flow style ([]- and {}-based). Here's how to do that:
yaml = YAML(typ="safe")
yaml.default_flow_style = False
(Note Anthon's comment below about typ; you need to set it to safe or unsafe so that the RoundTripLoader does not set the style of the empty sequences.)

Bukkit yaml checker

Here's my code. It says there's something wrong with one of the mapping values when I put it in the YAML checker. (Note: I took the addresses out as they are very confidential; it shouldn't affect anything.)
groups:
  md_5:
  - admin
disabled_commands:
- disabledcommandhere
player_limit: -1
stats: 34cce1fc-17ab-4156-bb9a-a1c06151137d
permissions:
  default:
  - bungeecord.command.server
  - bungeecord.command.list
  admin:
  - bungeecord.command.alert
  - bungeecord.command.end
  - bungeecord.command.ip
  - bungeecord.command.reload
listeners:
- max_players: -1
  fallback_server: hub
  host: 0.0.0.0:25577
  bind_local_address: true
  ping_passthrough: false
  tab_list: GLOBAL_PING
  default_server: hub
  forced_hosts:
    pvp.md-5.net: pvp
  tab_size: 60
  force_default_server: false
  motd: '&1Another Bungee server'
  query_enabled: false
  query_port: 25577
timeout: 30000
connection_throttle: 4000
servers:
  Hub:
    address: 198.50.128.131:25565
    restricted: false
    motd: '&1&l>&d&l>&r&b&lWelcome to &6&l&NFooseNetwork&1&l<&d&L<'
    ip_forward: false
    online_mode: true
      Skyblock:
        address: 198.50.128.133:25565
        restricted: false
        motd: ''
        ip_forward: false
        online_mode: true
          Factions:
            address: 198.50.128.143:25565
            motd: ''
            ip_forward: false
            online_mode: true
The problem is here:
    online_mode: true
      Skyblock:
        address: 198.50.128.133:25565
        restricted: false
        motd: ''
The mapping that begins with the Skyblock key is indented under the online_mode key, which would make it the value of that key, but that key already has the value true.
A few lines later you have a second online_mode key—duplicate keys are not allowed, although not all parsers are strict about this—and you repeat the same error as above with Factions.
I'm not certain, but I think what you want is something like this:
servers:
  Hub:
    address: 198.50.128.131:25565
    restricted: false
    motd: '&1&l>&d&l>&r&b&lWelcome to &6&l&NFooseNetwork&1&l<&d&L<'
    ip_forward: false
    online_mode: true
  Skyblock:
    address: 198.50.128.133:25565
    restricted: false
    motd: ''
    ip_forward: false
    online_mode: true
  Factions:
    address: 198.50.128.143:25565
    motd: ''
    ip_forward: false
    online_mode: true
Here the value of the servers key is a mapping with three keys (Hub, Skyblock and Factions), each of whose values is in turn a mapping.
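If you want to reproduce what the online checker reports locally, any YAML parser will flag the same problem. A quick sketch using PyYAML (the file name is a placeholder):

import sys
import yaml  # PyYAML; ruamel.yaml behaves similarly

try:
    with open("config.yml") as f:
        yaml.safe_load(f)
    print("config.yml parsed OK")
except yaml.YAMLError as exc:
    # The error message includes the line and column of the offending mapping value.
    print(f"YAML error: {exc}", file=sys.stderr)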
