How to validate a 3-layered YAML file using Cerberus?

I have a YAML config file and I want to validate it with Cerberus. The problem is that my YAML file is a kind of 3-layered dictionary, and it seems the validation function does not work when there are more than 2 levels of nesting. For example, when I run the following code:
from cerberus import Validator

a_config = {'dict1': {'dict11': {'dict111': 'foo111',
                                 'dict112': 'foo112'},
                      'dict12': {'dict121': 'foo121',
                                 'dict122': 'foo122'}},
            'dict2': 'foo2'}
a_simple_config = {'dict1': {'dict11': 'foo11'}, 'dict2': 'foo2'}

print(type(a_config))
print(type(a_simple_config))

simple_schema = {'dict1': {'type': 'dict',
                           'schema': {'dict11': {'type': 'string'}}},
                 'dict2': {'type': 'string'}}
v_simple = Validator(simple_schema)
print(v_simple.validate(a_simple_config, simple_schema))
print(v_simple.errors)

schema = {
    'dict1': {
        'type': 'dict',
        'schema': {
            'dict11': {
                'type': 'dict',
                'schema': {
                    'dict111': {'type': 'string'},
                    'dict112': {'type': 'string'}
                }
            }
        }
    },
    'dict2': {'type': 'string'}
}
v = Validator(schema)
print(v.validate(a_config, schema))
print(v.errors)
I get this (omitting the two type printouts):
True
{}
False
{'dict1': [{'dict12': ['unknown field']}]}
I think validating 3-layered files is not supported, so my only idea is to validate from layer 2 down and, if all of those parts are valid, conclude that my file is valid. Am I making a mistake in writing my schema when I have 3 layers, or is there a better way to validate such files?
Edit: @flyx claimed that the problem is in the definition of dict12, so I decided to replace it. And NOTHING changed; I get the same output again!
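For what it's worth, the error output itself points at the schema rather than at a nesting limit: a_config contains dict1.dict12, but the schema above only defines dict11, and Cerberus flags any key that has no rule as an 'unknown field'. Below is a minimal sketch of a schema that covers both branches (assuming dict12's leaves are meant to be strings), plus the allow_unknown alternative:

schema = {
    'dict1': {
        'type': 'dict',
        'schema': {
            'dict11': {
                'type': 'dict',
                'schema': {'dict111': {'type': 'string'},
                           'dict112': {'type': 'string'}}
            },
            # rule for the branch that was reported as an unknown field
            'dict12': {
                'type': 'dict',
                'schema': {'dict121': {'type': 'string'},
                           'dict122': {'type': 'string'}}
            }
        }
    },
    'dict2': {'type': 'string'}
}

v = Validator(schema)
print(v.validate(a_config))  # True once every key has a rule

# alternatively, keep the original schema and tolerate unmodelled keys
v_lenient = Validator(schema)
v_lenient.allow_unknown = True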

Related

Nipyapi create_controller ends with AssertionError all the time

So, I have been struggling with a problem for the last couple of days, and I could really use some guidance.
Short version: I am trying to create an AVRO Reader controller service in NiFi using the nipyapi library.
Now, based on the documentation, we have:
nipyapi.canvas.create_controller(parent_pg, controller, name=None)
parent_pg = Target Parent PG
controller = Type of Controller to create, found via the list_all_controller_types method
name = Name for the new Controller as a String (Optional)
First of all, I retrieved the list of all controller types; the following is the output for the controller I am trying to create:
{'bundle': {'artifact': 'nifi-record-serialization-services-nar',
            'group': 'org.apache.nifi',
            'version': '1.15.3'},
 'controller_service_apis': [{'bundle': {'artifact': 'nifi-standard-services-api-nar',
                                         'group': 'org.apache.nifi',
                                         'version': '1.15.3'},
                              'type': 'org.apache.nifi.serialization.RecordReaderFactory'}],
 'deprecation_reason': None,
 'description': 'Parses Avro data and returns each Avro record as an separate '
                'Record object. The Avro data may contain the schema itself, '
                'or the schema can be externalized and accessed by one of the '
                "methods offered by the 'Schema Access Strategy' property.",
 'explicit_restrictions': None,
 'restricted': False,
 'tags': ['comma',
          'reader',
          'record',
          'values',
          'delimited',
          'separated',
          'parse',
          'row',
          'avro'],
 'type': 'org.apache.nifi.avro.AvroReader',
 'usage_restriction': None}
With this information, based on the documentation, I have tried to create this controller service:
root_id = canvas.get_root_pg_id()  # retrieve the parent process group where I want to create the controller service
nipyapi.canvas.create_controller(root_id, 'org.apache.nifi.avro.AvroReader', name="[TEST][PROJECT1] AVRO Reader - Table 1")
Unfortunately, I keep getting an AssertionError from within the create_controller function: "assert isinstance(controller, nipyapi.nifi.DocumentedTypeDTO)" at line 1087.
Has anyone else encountered a similar issue?
And if so, how did you manage to fix it?
Or am I calling the create_controller function with invalid parameters?
Thank you.
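Judging by that assertion, create_controller seems to want the DocumentedTypeDTO object itself (an entry from list_all_controller_types) rather than the type name as a string, and parent_pg is likewise the process group entity rather than its id. A hedged, untested sketch along those lines:

import nipyapi

# fetch the process group entity, not just its id
root_id = nipyapi.canvas.get_root_pg_id()
root_pg = nipyapi.canvas.get_process_group(root_id, identifier_type='id')

# pick the DocumentedTypeDTO for the AvroReader out of the type listing
avro_reader_type = next(
    t for t in nipyapi.canvas.list_all_controller_types()
    if t.type == 'org.apache.nifi.avro.AvroReader'
)

nipyapi.canvas.create_controller(
    root_pg,
    avro_reader_type,
    name="[TEST][PROJECT1] AVRO Reader - Table 1"
)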

PyObjc: implement NSXPCInterface

I am writing an extension, and the main app is in PyObjC. I want to set up communication between the main app and the extension.
With reference to that link, I tried writing a protocol:
SampleExtensionProtocol = objc.formal_protocol('SampleExtensionProtocol', (), [
    objc.selector(None, b"upperCaseString:withReply:", signature=b"v#:##", isRequired=0),
    objc.selector(None, b"setEnableTemperProof:withReply:", signature=b"v#:##", isRequired=0),
])
The connection object is created:
connection = NSXPCConnection.alloc().initWithMachServiceName_options_("com.team.extension",NSXPCConnectionPrivileged)
I registered the metadata as well:
objc.registerMetaDataForSelector(b'NSObject', b'upperCaseString:withReply:', {
    'arguments': {
        3: {
            'callable': {
                'retval': {'type': b'#'},
                'arguments': {
                    0: {'type': b'^v'},
                    1: {'type': b'i'},
                },
            },
        }
    }
})

objc.registerMetaDataForSelector(b'NSObject', b'setEnableTemperProof:withReply:', {
    'arguments': {
        3: {
            'callable': {
                'retval': {'type': b'#'},
                'arguments': {
                    0: {'type': b'^v'},
                    1: {'type': b'i'},
                },
            },
        }
    }
})
But while creating the interface, I get an error:
mySvcIF = Foundation.NSXPCInterface.interfaceWithProtocol_(SampleExtensionProtocol)
ValueError: NSInvalidArgumentException - NSXPCInterface: Unable to get extended method signature from Protocol data (SampleExtensionProtocol / upperCaseString:withReply:). Use of clang is required for NSXPCInterface.
It is not possible to define a protocol in Python that can be used with NSXPCInterface, because that class needs "extended method signatures", which cannot be registered using the public API for programmatically creating protocols in the Objective-C runtime.
As a workaround, you have to define the protocol in a small C extension compiled with clang. The PyObjC documentation describes a small gotcha with that approach at https://pyobjc.readthedocs.io/en/latest/notes/using-nsxpcinterface.html, including how to avoid it.
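As a rough illustration of that workaround (hedged: sample_protocols is a hypothetical name for the clang-compiled extension that defines the protocol), once that extension has been imported, the protocol can be looked up in the runtime and handed to NSXPCInterface:

import objc
import Foundation

import sample_protocols  # hypothetical C extension, built with clang, that defines SampleExtensionProtocol

# look the protocol up in the Objective-C runtime instead of constructing it in Python
proto = objc.protocolNamed('SampleExtensionProtocol')
mySvcIF = Foundation.NSXPCInterface.interfaceWithProtocol_(proto)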

typeProviders with private GKE clusters

I am new to Google Cloud Deployment Manager (GCM) and am writing some code in order to practice. I have read some interesting articles detailing how I can use deploymentmanager.v2beta.typeProvider to extend GCM and use it to configure Kubernetes objects themselves as additional deployment resources. This is very appealing behavior, and it seems to open up great opportunities to extend declarative automation to any API, which is cool.
I am attempting to create a private node/public endpoint GKE cluster that is managed by custom typeProvider resources corresponding to GKE API calls. It seems that a public node/public endpoint GKE cluster is the only configuration that supports GCM custom typeProviders, which seems wrong given that a private node/public endpoint GKE configuration is possible.
It would be strange for deploymentmanager.v2beta.typeProvider not to support a private node/public endpoint GKE configuration.
As an aside, I feel that a private node/private endpoint cluster with a Cloud Endpoint exposing it to the GCM typeProvider's public API endpoint requirement should also be a valid architecture, but I have yet to test it.
Using the following code
def GenerateConfig(context):
    # Some constant type vars that are not really
    resources = []
    outputs = []
    gcp_type_provider = 'deploymentmanager.v2beta.typeProvider'
    extension_prefix = 'k8s'
    api_version = 'v1'
    kube_initial_auth = {
        'username': 'luis',
        'password': 'letmeinporfavors',
        "clientCertificateConfig": {
            'issueClientCertificate': True
        }
    }

    # EXTEND API TO CONTROL KUBERNETES BEGIN
    kubernetes_exposed_apis = [
        {
            'name': '{}-{}-api-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'api/v1'
        },
        {
            'name': '{}-{}-apps-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/apps/v1'
        },
        {
            'name': '{}-{}-rbac-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/rbac.authorization.k8s.io/v1'
        },
        {
            'name': '{}-{}-v1beta1-extensions-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/extensions/v1beta1'
        }
    ]

    for exposed_api in kubernetes_exposed_apis:
        descriptor_url = 'https://{}/swaggerapi/{}'.format(
            '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
            exposed_api['endpoint']
        )
        resources.append(
            {
                'name': exposed_api['name'],
                'type': gcp_type_provider,
                'properties': {
                    'options': {
                        'validationOptions': {
                            'schemaValidation': 'IGNORE_WITH_WARNINGS'
                        },
                        'inputMappings': [
                            {
                                'fieldName': 'name',
                                'location': 'PATH',
                                'methodMatch': '^(GET|DELETE|PUT)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'metadata.name',
                                'location': 'BODY',
                                'methodMatch': '^(PUT|POST)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'Authorization',
                                'location': 'HEADER',
                                'value': '$.concat("Bearer ", $.googleOauth2AccessToken())'
                            }
                        ],
                    },
                    'descriptorUrl': descriptor_url
                },
            }
        )
    # EXTEND API TO CONTROL KUBERNETES END

    # NETWORK DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-network".format(context.env['deployment']),
            'type': "compute.{}.network".format(api_version),
            'properties': {
                'description': "{} network".format(context.env['deployment']),
                'autoCreateSubnetworks': False,
                'routingConfig': {
                    'routingMode': 'REGIONAL'
                }
            },
        }
    )

    resources.append(
        {
            'name': "{}-subnetwork".format(context.env['deployment']),
            'type': "compute.{}.subnetwork".format(api_version),
            'properties': {
                'description': "{} subnetwork".format(
                    context.env['deployment']
                ),
                'network': "$(ref.{}-network.selfLink)".format(
                    context.env['deployment']
                ),
                'ipCidrRange': '10.64.1.0/24',
                'region': 'us-east1',
                'privateIpGoogleAccess': True,
                'enableFlowLogs': False,
            }
        }
    )
    # NETWORK DEFINITION END

    # GKE DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-k8s-cluster".format(context.env['deployment']),
            'type': "container.{}.cluster".format(api_version),
            'properties': {
                'zone': 'us-east1-b',
                'cluster': {
                    'description': "{} kubernetes cluster".format(
                        context.env['deployment']
                    ),
                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    },
                    'ipAllocationPolicy': {
                        'useIpAliases': True
                    },
                    'nodePools': [
                        {
                            'name': "{}-cluster-pool".format(
                                context.env['deployment']
                            ),
                            'initialNodeCount': 1,
                            'config': {
                                'machineType': 'n1-standard-1',
                                'oauthScopes': [
                                    'https://www.googleapis.com/auth/compute',
                                    'https://www.googleapis.com/auth/devstorage.read_only',
                                    'https://www.googleapis.com/auth/logging.write',
                                    'https://www.googleapis.com/auth/monitoring'
                                ],
                            },
                            'management': {
                                'autoUpgrade': False,
                                'autoRepair': True
                            }
                        }
                    ],
                    'masterAuth': kube_initial_auth,
                    'loggingService': 'logging.googleapis.com',
                    'monitoringService': 'monitoring.googleapis.com',
                    'network': "$(ref.{}-network.selfLink)".format(
                        context.env['deployment']
                    ),
                    'clusterIpv4Cidr': '10.0.0.0/14',
                    'subnetwork': "$(ref.{}-subnetwork.selfLink)".format(
                        context.env['deployment']
                    ),
                    'enableKubernetesAlpha': False,
                    'resourceLabels': {
                        'purpose': 'expiramentation'
                    },
                    'networkPolicy': {
                        'provider': 'CALICO',
                        'enabled': True
                    },
                    'initialClusterVersion': 'latest',
                    'enableTpu': False,
                }
            }
        }
    )

    outputs.append(
        {
            'name': '{}-cluster-endpoint'.format(
                context.env['deployment']
            ),
            'value': '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
        }
    )
    # GKE DEFINITION END

    # bring it all together
    template = {
        'resources': resources,
        'outputs': outputs
    }

    # give it to google
    return template


if __name__ == '__main__':
    # note: Deployment Manager passes a context object with an `env` mapping;
    # calling this directly with a bare dict will fail on context.env
    GenerateConfig({})
I will also note the subsequent hello-world template, which uses the typeProviders created above.
def current_config():
    '''
    get the current configuration
    '''
    return {
        'name': 'atrium'
    }


def GenerateConfig(context):
    resources = []
    conf = current_config()

    resources.append(
        {
            'name': '{}-svc'.format(conf['name']),
            'type': "{}/{}-k8s-api-v1-type:/api/v1/namespaces/{}/pods".format(
                context.env['project'],
                conf['name'],
                '{namespace}'
            ),
            'properties': {
                'namespace': 'default',
                'apiVersion': 'v1',
                'kind': 'Pod',
                'metadata': {
                    'name': 'hello-world',
                },
                'spec': {
                    'restartPolicy': 'Never',
                    'containers': [
                        {
                            'name': 'hello',
                            'image': 'ubuntu:14.04',
                            'command': ['/bin/echo', 'hello', 'world'],
                        }
                    ]
                }
            }
        }
    )

    template = {
        'resources': resources,
    }

    return template


if __name__ == '__main__':
    GenerateConfig({})
If I leave enablePrivateNodes as False:
'privateClusterConfig': {
    'enablePrivateNodes': False,
    'masterIpv4CidrBlock': '10.0.0.0/28'
}
I get this as a response
~/code/github/gcp/expiramentation/atrium_gcp_infra 24s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14]...done.
Delete operation operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14 completed successfully.
Waiting for delete [operation-1562105393399-58cba01134528-be47dc30-755cb106]...done.
Delete operation operation-1562105393399-58cba01134528-be47dc30-755cb106 completed successfully.
The fingerprint of the deployment is IiWcrdbZA5MedNlJLIicOg==
Waiting for create [operation-1562105786056-58cba187abee2-5d761e87-b446baca]...done.
Create operation operation-1562105786056-58cba187abee2-5d761e87-b446baca completed successfully.
NAME                                TYPE                                   STATE      ERRORS  INTENT
atrium-k8s-api-v1-type              deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-apps-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-cluster                  container.v1.cluster                   COMPLETED  []
atrium-k8s-rbac-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-v1beta1-extensions-type  deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-network                      compute.v1.network                     COMPLETED  []
atrium-subnetwork                   compute.v1.subnetwork                  COMPLETED  []
The fingerprint of the deployment is QJ2NS5EhjemyQJThUWYNHA==
Waiting for create [operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb]...done.
Create operation operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb completed successfully.
NAME        TYPE                                                                       STATE      ERRORS  INTENT
atrium-svc  atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods  COMPLETED  []
~/code/github/gcp/expiramentation/atrium_gcp_infra 13m 48s
This is a good response, and my custom typeProvider resources create correctly using the APIs of my freshly created cluster.
If I make this cluster have private nodes, however, with:
'privateClusterConfig': {
    'enablePrivateNodes': True,
    'masterIpv4CidrBlock': '10.0.0.0/28'
},
I fail with
~/code/github/gcp/expiramentation/atrium_gcp_infra 56s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562106572016-58cba47538c93-d34c17fc-8b863765]...done.
Delete operation operation-1562106572016-58cba47538c93-d34c17fc-8b863765 completed successfully.
Waiting for delete [operation-1562106592237-58cba4888184f-a5bc3135-4e662eed]...done.
Delete operation operation-1562106592237-58cba4888184f-a5bc3135-4e662eed completed successfully.
The fingerprint of the deployment is dk5nh_u5ZFFvYO-pCXnFBg==
Waiting for create [operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd]...done.
Create operation operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd completed successfully.
NAME                                TYPE                                   STATE      ERRORS  INTENT
atrium-k8s-api-v1-type              deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-apps-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-cluster                  container.v1.cluster                   COMPLETED  []
atrium-k8s-rbac-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-v1beta1-extensions-type  deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-network                      compute.v1.network                     COMPLETED  []
atrium-subnetwork                   compute.v1.subnetwork                  COMPLETED  []
The fingerprint of the deployment is 4RnscwpcYTtS614VXqtjRg==
Waiting for create [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]: errors:
- code: ERROR_PROCESSING_REQUEST
  message: 'Error fetching URL https://10.0.0.2:443/api/v1/namespaces/default/pods,
    reason: ERROR_EXCLUDED_IP'
The 10.0.0.2 address seems to be the private endpoint of my cluster. I am having a difficult time tracking down where I can override the host in https://10.0.0.2:443/api/v1/namespaces/default/pods so that it would contact the publicEndpoint rather than the privateEndpoint.
I believe that if this call went out to the public endpoint, it would succeed. As an interesting aside, the typeProvider declarations do hit the publicEndpoint of the cluster when fetching the descriptorUrl, and they succeed in doing so. Despite that, the creation of the actual API resources, such as the hello-world example, tries to talk to the private endpoint.
I feel this behavior should be overridable somewhere, but I am failing to find the clue.
I have tried both a working public node configuration and a non-working private node configuration
So I ran into the same problem while writing this: https://github.com/Aahzymandius/k8s-workshops/tree/master/8-live-debugging. While trying to debug it, I ran into 2 issues:
1. The public endpoint is used initially to fetch the swagger definitions of the various k8s APIs and resources; however, that response includes the private endpoint of the API server, which causes the k8s type to try to use that IP.
2. While further trying to debug that error, plus a new one I was hitting, I discovered that GKE 1.14.x and later does not support insecure calls to the k8s API, which caused the k8s type to fail even with fully public clusters.
Since the functionality seems to fail with newer versions of GKE, I just stopped trying to debug it, though I'd recommend reporting an issue on the GitHub repo for it. This functionality would be great to keep working.
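If anyone wants to keep experimenting despite that, a hedged workaround implied by the second finding is to pin initialClusterVersion to a pre-1.14 release instead of 'latest'. The sketch below (assuming google-api-python-client and application-default credentials; the project id is the one visible in the output above) just lists the master versions the zone actually offers, so a pre-1.14 entry can be picked:

from googleapiclient import discovery

# query the GKE server config for the zone used in the template
container = discovery.build('container', 'v1')
config = container.projects().zones().getServerconfig(
    projectId='atrium-244423', zone='us-east1-b').execute()

# pick a pre-1.14 entry from this list for 'initialClusterVersion'
print(config['validMasterVersions'])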

After Image Uploading, image uri is null for iOS alone in React Native

In my React Native app, I have added functionality to upload multiple images, which are stored as an image[] array that includes a uri for each image.
This works perfectly for Android.
But for iOS, the image[] array that is created also contains some data, but it is entirely different from Android's, and the uri is always null.
Please help me solve this issue! (Note: Android should not be affected by the change.)
Thanks in advance!
Images Array: (For Android)
[
  { 'file': 'content://media/external/images/media/30993',
    'size': 125434,
    'uri': 'content://media/external/images/media/30993',
    'isDirectory': false,
    'md5': '041550fe959f36d1b247bb6b2eaa3272',
    'exists': true },
  { 'file': 'content://media/external/images/media/30988',
    'size': 541148,
    'uri': 'content://media/external/images/media/30988',
    'isDirectory': false,
    'md5': '39bf35dfcf0852c5412205195a395b29',
    'exists': true }
]
Images Array: (For iOS)
[
  { 'file': 'assets-library://asset/asset.JPG?id=C2FBB68D-2012-4696-8648-D8990F72BF77&ext=JPG',
    'size': 125434,
    'modificationTime': 1552019118.1320686,
    'uri': null,
    'isDirectory': 0,
    'md5': '041550fe959f36d1b247bb6b2eaa3272',
    'exists': 1 },
  { 'file': 'assets-library://asset/asset.JPG?id=98EBA95A-254E-40F4-8E1D-C355D8795777&ext=JPG',
    'size': 541148,
    'modificationTime': 1552019117.7241733,
    'uri': null,
    'isDirectory': 0,
    'md5': '39bf35dfcf0852c5412205195a395b29',
    'exists': 1 }
]
[Updated]
I need to get the absolute file path from the URL 'assets-library://asset/asset.JPG?id=98EBA95A-254E-40F4-8E1D-C355D8795777&ext=JPG', similar to the 'content://media/external/images/media/30988' path on Android.
Please help me resolve this issue!
Thanks in advance!
I had this exact same issue.
The solution is to set the uri to the 'file' attribute.
This is how you can append the image to formData correctly for both iOS and Android:
let uri = image.file;
// extract the filetype
let fileType = uri.substring(uri.lastIndexOf(".") + 1);

formData.append('uploads[]', {
    uri,
    name: `photo.${fileType}`,
    type: `image/${fileType}`,
});
You can't remove the additional data in the iOS filename ('?id=98EBA95A-254E-40F4-8E1D-C355D8795777&ext=JPG') at this point; otherwise it won't upload. You can remove it on the server side, though.

Why isn't fineUploader sending an x-amz-credential property among the request conditions?

My server-side policy signing code is failing on this line:
credentialCondition = conditions[i]["x-amz-credential"];
(Note that this code is taken from the Node example authored by the FineUploader maintainer. I have only changed it by forcing it to use version 4 signing without checking for a version parameter.)
So it's looking for an x-amz-credential parameter in the request body, among the other conditions, but it isn't there. I checked the request in the dev tools, and the conditions look like this:
0: {acl: "private"}
1: {bucket: "menu-translator"}
2: {Content-Type: "image/jpeg"}
3: {success_action_status: "200"}
4: {key: "4cb34913-f9dc-40db-aecc-a9fdf518a334.jpg"}
5: {x-amz-meta-qqfilename: "f86d03fb-1b62-4073-9458-17e1dfd8b3ae.jpg"}
As you can see, no credentials. Here is my client-side options code:
var uploader = new qq.s3.FineUploader({
    debug: true,
    element: document.getElementById('uploader'),
    request: {
        endpoint: 'menu-translator.s3.amazonaws.com',
        accessKey: 'mykey'
    },
    signature: {
        endpoint: '/s3signaturehandler'
    },
    iframeSupport: {
        localBlankPagePath: '/views/blankForIE9Support.html'
    },
    cors: {
        expected: true,
        sendCredentials: true
    },
    uploadSuccess: {
        endpoint: 'success.html'
    }
});
What am I missing here?
I fixed this by altering my options code in one small way:
signature: {
    endpoint: '/s3signaturehandler',
    version: 4
},
I specified version: 4 in the signature section. Not that this is documented anywhere, but apparently the client-side code uses this as a flag for whether or not to send along the key information needed by the server.
