I am writing an extension, and the main app is in PyObjC. I want to set up communication between the main app and the extension.
With reference to the link, I tried writing a protocol:
SampleExtensionProtocol = objc.formal_protocol('SampleExtensionProtocol', (), [
    objc.selector(None, b"upperCaseString:withReply:", signature=b"v@:@@", isRequired=0),
    objc.selector(None, b"setEnableTemperProof:withReply:", signature=b"v@:@@", isRequired=0),
])
The connection object is created:
connection = NSXPCConnection.alloc().initWithMachServiceName_options_("com.team.extension", NSXPCConnectionPrivileged)
I registered the metadata as well:
objc.registerMetaDataForSelector(b'NSObject', b'upperCaseString:withReply:', {
    'arguments': {
        3: {
            'callable': {
                'retval': {'type': b'@'},
                'arguments': {
                    0: {'type': b'^v'},
                    1: {'type': b'i'},
                },
            },
        },
    },
})
objc.registerMetaDataForSelector(b'NSObject', b'setEnableTemperProof:withReply:', {
    'arguments': {
        3: {
            'callable': {
                'retval': {'type': b'@'},
                'arguments': {
                    0: {'type': b'^v'},
                    1: {'type': b'i'},
                },
            },
        },
    },
})
But while creating the interface I get an error:
mySvcIF = Foundation.NSXPCInterface.interfaceWithProtocol_(SampleExtensionProtocol)
ValueError: NSInvalidArgumentException - NSXPCInterface: Unable to get extended method signature from Protocol data (SampleExtensionProtocol / upperCaseString:withReply:). Use of clang is required for NSXPCInterface.
It is not possible to define a protocol in Python that can be used with NSXPCInterface, because that class needs "extended method signatures", which cannot be registered using the public API for programmatically creating protocols in the Objective-C runtime.
As a workaround you have to define the protocol in a small C extension. The PyObjC documentation describes a small gotcha for that at https://pyobjc.readthedocs.io/en/latest/notes/using-nsxpcinterface.html, including how to avoid that problem.
I'm facing an error on file upload with GraphQL Upload using the ReadStream function:
error: 17:10:32.466+02:00 [ExceptionsHandler] Maximum call stack size exceeded
error: 17:10:32.467+02:00 [graphql] Maximum call stack size exceeded RangeError: Maximum call stack size exceeded
at ReadStream.open (/Users/xxxx/Documents/Xxxx/xxxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:80:7)
at _openReadFs (internal/fs/streams.js:117:12)
at ReadStream.<anonymous> (internal/fs/streams.js:110:3)
at ReadStream.deprecated [as open] (internal/util.js:96:15)
at ReadStream.open (/Users/xxxx/Documents/Xxxxx/xxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:90:11)
at _openReadFs (internal/fs/streams.js:117:12)
at ReadStream.<anonymous> (internal/fs/streams.js:110:3)
at ReadStream.deprecated [as open] (internal/util.js:96:15)
at ReadStream.open (/Users/xxxx/Documents/Xxxxx/xxxxx/xxxxx-api/node_modules/fs-capacitor/lib/index.js:90:11)
at _openReadFs (internal/fs/streams.js:117:12)
(node:44569) [DEP0135] DeprecationWarning: ReadStream.prototype.open() is deprecated
(Use `node --trace-deprecation ...` to show where the warning was created)
Here is the function I'm using to upload a file:
public async cleanUpload(upload: GraphqlUpload, oldName?: string) {
  let uploadResponse: FileInfo;
  try {
    if (oldName) {
      this.safeRemove(oldName);
    }
    uploadResponse = await this.uploadFile(
      {
        fileName: upload.filename,
        stream: upload.createReadStream(),
        mimetype: upload.mimetype,
      },
      { isPublic: true, filter: imageFilterFunction },
    );
    return uploadResponse;
  } catch (e) {
    this.logger.error('unable to upload', e);
    if (uploadResponse) {
      this.safeRemove(uploadResponse.fileName);
    }
    throw e;
  }
}
The solution was to downgrade the Node version to 12.18 from 14.17.
To keep using Node 14.17, you can disable Apollo's internal upload handling and use graphql-upload yourself.
Please see this comment, which outlines the approach quoted here.
For any future readers, here is how to fix the issue once and for all.
The problem is that @nestjs/graphql's dependency, apollo-server-core, depends on an old version of graphql-upload (v8.0) which has conflicts with newer versions of Node.js and various packages. Apollo Server v2.21.0 seems to have fixed this but @nestjs/graphql is still on v2.16.1. Furthermore, Apollo Server v3 will be removing the built-in graphql-upload.
The solution suggested in this comment is to disable Apollo Server's built-in handling of uploads and use your own. This can be done in 3 simple steps:
1. package.json
Remove the fs-capacitor and graphql-upload entries from the resolutions section if you added them, and install the latest version of the graphql-upload package (v11.0.0 at this time) as a dependency.
2. src/app.module.ts
Disable Apollo Server's built-in upload handling and add the graphqlUploadExpress middleware to your application.
import { graphqlUploadExpress } from "graphql-upload"
import { MiddlewareConsumer, Module, NestModule } from "@nestjs/common"

@Module({
  imports: [
    GraphQLModule.forRoot({
      uploads: false, // disable built-in upload handling
    }),
  ],
})
export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    consumer.apply(graphqlUploadExpress()).forRoutes("graphql")
  }
}
3. src/blog/post.resolver.ts (example resolver)
Remove the GraphQLUpload import from apollo-server-core and import from graphql-upload instead.
// import { GraphQLUpload } from "apollo-server-core" <-- remove this
import { FileUpload, GraphQLUpload } from "graphql-upload"

@Mutation(() => Post)
async postCreate(
  @Args("title") title: string,
  @Args("body") body: string,
  @Args("attachment", { type: () => GraphQLUpload }) attachment: Promise<FileUpload>,
) {
  const { filename, mimetype, encoding, createReadStream } = await attachment
  console.log("attachment:", filename, mimetype, encoding)
  const stream = createReadStream()
  stream.on("data", (chunk: Buffer) => { /* do stuff with data here */ })
}
I have a YAML config file and I want to validate it using Cerberus. The problem is that my YAML file is a kind of 3-layered dictionary, and it seems the validation function does not work when there are more than 2 levels of nesting. As an example, when I run the following code:
a_config = {'dict1': {'dict11': {'dict111': 'foo111',
                                 'dict112': 'foo112'},
                      'dict12': {'dict121': 'foo121',
                                 'dict122': 'foo122'}},
            'dict2': 'foo2'}
a_simple_config = {'dict1': {'dict11': 'foo11'}, 'dict2': 'foo2'}
print(type(a_config))
print(type(a_simple_config))
simple_schema = {'dict1': {'type': 'dict', 'schema': {'dict11': {'type': 'string'}}}, 'dict2': {'type': 'string'}}
v_simple = Validator(simple_schema)
schema = {
    'dict1': {
        'type': 'dict',
        'schema': {
            'dict11': {
                'type': 'dict',
                'schema': {
                    'dict111': {'type': 'string'},
                    'dict112': {'type': 'string'}
                }
            }
        }
    },
    'dict2': {'type': 'string'}
}
v = Validator(schema)
print(v.validate(a_config, schema))
print(v.errors)
I get this:
True
{}
False
{'dict1': [{'dict12': ['unknown field']}]}
I think validating 3-layered files is not supported. So my only idea is to validate from layer 2 down and, if all of those are valid, conclude that my file is valid. I wish to know: am I making some mistake in writing my schema when I have 3 layers? Or is there a better way of validating such files?
Edit: @flyx claimed that the problem is in the definition of dict12, so I decided to replace it. AND NOTHING changed. I again have the same output!
I'm developing a web application with a React frontend and a .NET Core 3.1 backend, and was asked to add Azure AD single sign-on capabilities. I'm using the react-aad-msal library (https://www.npmjs.com/package/react-aad-msal). I'm calling MsalAuthProvider.getAccessToken() and get this error:
Can't construct an AccessTokenResponse from a AuthResponse that has a token type of "id_token".
Can anyone help me?
Anyone? Btw. getAccessToken() is actually inside the standard msal library, if that helps.
I found a solution myself by going into package.json and lowering the version number on "msal" in "dependencies", like this:
"msal": "~1.3.0",
Change the scopes in authProvider:
export const authProvider = new MsalAuthProvider(
  {
    auth: {
      authority: 'https://login.microsoftonline.com/5555555-5555-5555-5555-555555555555',
      clientId: 'AAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA',
      postLogoutRedirectUri: 'http://localhost:3000/signin',
      redirectUri: 'http://localhost:3000/signin',
      validateAuthority: true,
      navigateToLoginRequestUrl: false
    },
    system: {
      logger: new Logger(
        (logLevel, message, containsPii) => {
          console.log("[MSAL]", message);
        },
        {
          level: LogLevel.Verbose,
          piiLoggingEnabled: false
        }
      )
    },
    cache: {
      cacheLocation: "sessionStorage",
      storeAuthStateInCookie: true
    }
  },
  {
    scopes: ["openid", "profile", "user.read"] // <<<-----------|
  },
  {
    loginType: LoginType.Popup,
    tokenRefreshUri: window.location.origin + "/auth.html"
  }
);
I'm using rhea (https://github.com/amqp/rhea), a Node.js library for developing AMQP 1.0 clients.
I'm trying to adapt the https://github.com/amqp/rhea/tree/master/examples/selector example to use an x-match expression instead of a JMS selector expression.
The purpose is to implement a header-routing mechanism on an AMQP 1.0 compliant broker (ActiveMQ, Qpid, ...).
I tried this code in the appropriate section in recv.js:
connection.open_receiver({
source: {
address: 'amq.match',
filter: {
'x-match': 'all',
value: {
'nat': 'it',
'prod': 'a22'
}
}
}
})
I received a connection error "Expected value type is 'Filter' but got 'String' amqp:decode-error" from the Qpid Java broker (rel. 7.1.0).
According to this answer received on the rhea GitHub repo:
https://github.com/amqp/rhea/issues/200#issuecomment-469220880
The filter needs to be a described value. Try something like this:
connection.open_receiver({
  source: {
    address: 'amq.match',
    filter: {
      'foo': amqp_types.wrap_described({
        'nat': 'it',
        'prod': 'a22',
        'x-match': 'all'
      }, 0x468C00000002)
    }
  }
});
where:
var amqp_types = require('rhea').types;
That works only with Qpid C++; it is not working with ActiveMQ or Qpid Java.
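As an aside, the magic number passed to wrap_described() is not arbitrary: AMQP 1.0 64-bit descriptor codes conventionally carry a vendor domain id in the high 32 bits and a per-vendor type code in the low 32 bits. A quick check of how 0x468C00000002 decomposes (the association of code 2 with the legacy headers-binding filter is a Qpid convention, stated here as my understanding):

```python
# The 64-bit descriptor code used with wrap_described() above.
DESCRIPTOR = 0x468C00000002

vendor = DESCRIPTOR >> 32            # high 32 bits: vendor / domain id
type_code = DESCRIPTOR & 0xFFFFFFFF  # low 32 bits: type within that domain

# 0x468c is 18060, the Apache Software Foundation's IANA enterprise number.
print(hex(vendor), vendor, type_code)  # 0x468c 18060 2
```

Brokers that reject the filter are complaining about the described-type wrapper as a whole, not about this number, which is why the same descriptor behaves differently across Qpid C++, Qpid Java, and ActiveMQ.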
My server-side policy signing code is failing on this line:
credentialCondition = conditions[i]["x-amz-credential"];
(Note that this code is taken from the Node example authored by the FineUploader maintainer. I have only changed it by forcing it to use version 4 signing without checking for a version parameter.)
So it's looking for an x-amz-credential parameter in the request body, among the other conditions, but it isn't there. I checked the request in the dev tools and the conditions look like this:
0: {acl: "private"}
1: {bucket: "menu-translator"}
2: {Content-Type: "image/jpeg"}
3: {success_action_status: "200"}
4: {key: "4cb34913-f9dc-40db-aecc-a9fdf518a334.jpg"}
5: {x-amz-meta-qqfilename: "f86d03fb-1b62-4073-9458-17e1dfd8b3ae.jpg"}
As you can see, no credentials. Here is my client-side options code:
var uploader = new qq.s3.FineUploader({
  debug: true,
  element: document.getElementById('uploader'),
  request: {
    endpoint: 'menu-translator.s3.amazonaws.com',
    accessKey: 'mykey'
  },
  signature: {
    endpoint: '/s3signaturehandler'
  },
  iframeSupport: {
    localBlankPagePath: '/views/blankForIE9Support.html'
  },
  cors: {
    expected: true,
    sendCredentials: true
  },
  uploadSuccess: {
    endpoint: 'success.html'
  }
});
What am I missing here?
I fixed this by altering my options code in one small way:
signature: {
  endpoint: '/s3signaturehandler',
  version: 4
},
I specified version: 4 in the signature section. Not that this is documented anywhere, but apparently the client-side code uses this as a flag for whether or not to send along the key information needed by the server.
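For context on what the server does once the x-amz-credential condition arrives: it parses the date and region out of the credential, derives an AWS Signature Version 4 signing key, and signs the base64-encoded policy document with it. The question's handler is Node, but the derivation is language-neutral; here is a sketch in Python, with the secret key, date, region, and policy contents as placeholders:

```python
import base64
import hashlib
import hmac
import json

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def sign_policy_v4(policy_b64: str, secret_key: str, date: str, region: str) -> str:
    """Sign a base64-encoded POST policy with a SigV4 derived key.

    `date` (YYYYMMDD) and `region` come from the x-amz-credential
    condition, e.g. AKIA.../20200101/us-east-1/s3/aws4_request.
    """
    k_date = _hmac(('AWS4' + secret_key).encode('utf-8'), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, 's3')
    k_signing = _hmac(k_service, 'aws4_request')
    return hmac.new(k_signing, policy_b64.encode('utf-8'),
                    hashlib.sha256).hexdigest()

# Placeholder policy shaped like the conditions shown in the question.
policy = {'expiration': '2030-01-01T00:00:00Z',
          'conditions': [{'acl': 'private'}, {'bucket': 'menu-translator'}]}
policy_b64 = base64.b64encode(json.dumps(policy).encode('utf-8')).decode('ascii')

signature = sign_policy_v4(policy_b64, 'EXAMPLE_SECRET', '20200101', 'us-east-1')
```

This derived-key step is what the example server only performs on the version 4 path, which is consistent with the missing x-amz-credential condition until the client-side version flag was set.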