How to allow clients to hook into preconfigured hooks (events) in an application

I have a pretty standard application with a frontend, a backend, and some options in the frontend for modifying data. My backend fires events when data is modified (e.g. record created, record updated, user logged in, etc.).
Now what I want to do is for my customers to be able to code their own functions and "hook" them into these events.
So far the approaches I have thought of are:
Allowing users in the frontend to write some code in a code editor like CodeMirror, but storing that code and executing it with some eval() seems kind of risky and unstable.
My second approach is illustrated below (to the best of my ability at least). The point is that the CRUD API calls a separate "hook" web service that has these hook methods (recordUpdated, recordCreated, userLoggedIn, ...) exposed. The client library then needs to implement some predefined interfaces for the different hooks I expose. This still seems doable, but my issue is that I can't figure out how my customers would deploy their library into the running "hook" service.
So it's kind of like webhooks, except I already know the exact hooks to be created which I figured could allow for an easier setup than customers having to create their own web services from scratch, but instead just create a library that is then deployed into an existing API (or something like that...). Preferably the infrastructure details should be hidden from the customers so they can focus solely on making business logic inside their custom hooks.
It's kind of hard to explain, but hopefully someone will get it and can tell me if I'm on the right track, or if there is a more standard way of doing hooks like these?
Currently the entire backend is written in C# but that is not a requirement.

I'll just draft out the main framework, then wait for your feedback to fill in anything unclear.
Disclaimer: I don't really have expertise with security and sandboxing. I just know it's an important thing, but really, it's beyond me. You go figure it out 😂
Suppose we're now in a safe sandbox where all malicious behaviors are magically taken care of, and let's write some Node.js code for that "hook engine".
How users deploy their plugin code
Let's assume we use file-based deployment. The interface you need to implement is a PluginRegistry.
class PluginRegistry {
  constructor() {
    /**
     * The plugin registry holds records of plugin info:
     * type IPluginInfo = {
     *   userId: string,
     *   hash: string,
     *   filePath: string,
     *   module: null | object,
     * }
     */
    this.records = new Map()
  }
  register(userId, info) {
    this.records.set(userId, info)
  }
  query(userId) {
    return this.records.get(userId)
  }
}
// plugin registry should be a singleton in your app.
const pluginRegistrySingleton = new PluginRegistry()
// the app opens an HTTP endpoint
// that accepts plugin registration;
// this is how you receive user-provided code
// (isPluginRegistration, processRequest, pluginDir and port are placeholders)
const http = require('http')
const fs = require('fs')

const server = http.createServer((req, res) => {
  if (isPluginRegistration(req)) {
    let { userId, fileBytes, hash } = processRequest(req)
    let filePath = pluginDir + '/' + hash + '.js'
    let pluginInfo = {
      userId,
      // use some kind of hash (e.g. of the file contents)
      // to uniquely identify the plugin version
      hash,
      filePath,
      // the "module" field is left empty;
      // it will be lazy-loaded when the
      // plugin code is actually needed
      module: null,
    }
    let existingPluginInfo = pluginRegistrySingleton.query(userId)
    if (existingPluginInfo && existingPluginInfo.hash === hash) {
      // already exists, skip
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok');
    } else {
      // plugin code written down somewhere, then registered
      fs.writeFile(filePath, fileBytes, (err) => {
        if (err) {
          res.writeHead(500, { 'Content-Type': 'text/plain' });
          res.end('failed to store plugin');
          return
        }
        pluginRegistrySingleton.register(userId, pluginInfo)
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('ok');
      })
    }
  }
})

server.listen(port)
From the perspective of the hook engine, it simply opens an HTTP endpoint to accept plugin registration, agnostic to the source.
Whether it comes from a CI/CD pipeline or a plain web-interface upload doesn't matter. If you have a CI/CD setup for your users, it is just a dedicated build machine that runs bash scripts, isn't it? So just fire a curl call at this endpoint to upload whatever you need. The same applies to a web interface.
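For illustration, a registration call from a Node.js build script could look roughly like the sketch below. The endpoint URL, field names and base64 encoding are assumptions on my side; they just need to match whatever your processRequest implementation expects.

// sketch of a registration call from a build script (Node 18+, global fetch)
// NOTE: endpoint URL, field names and encoding are assumptions, not a fixed contract
const fs = require('fs')
const crypto = require('crypto')

const fileBytes = fs.readFileSync('./dist/my-plugin.js')
const hash = crypto.createHash('sha256').update(fileBytes).digest('hex')

fetch('https://hook-engine.example.com/plugins', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    userId: 'customer-123',
    hash,
    // base64 so the file survives JSON transport; processRequest would decode it
    fileBytes: fileBytes.toString('base64'),
  }),
}).then((res) => console.log('plugin registered:', res.status))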
How we would execute plugin code
User plugin code is just normal Node.js module code. You might instruct them to expose a certain API and conform to your protocol (a sample plugin module is sketched after the engine code below).
class HookEngine {
  constructor(pluginRegistry) {
    // dependency injection
    this.pluginRegistry = pluginRegistry
  }
  // hook
  oncreate(payload) {
    // the hook call payload should identify the user
    const pluginInfo = this.pluginRegistry.query(payload.user.id)
    if (!pluginInfo) {
      // this user has not registered a plugin
      return
    }
    // lazy-load the plugin module when needed
    if (!pluginInfo.module) {
      pluginInfo.module = require(pluginInfo.filePath)
    }
    // a user plugin module is just a normal Node.js module:
    // you load it with `require`, and you call whatever function you need
    pluginInfo.module.oncreate(payload)
  }
}
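For reference, a customer plugin under this contract could be as small as the sketch below; the only real requirement is that the exported function names match the hooks your engine calls (the payload shape used here is an assumption).

// my-plugin.js, a sketch of what a customer might ship
module.exports = {
  // called by HookEngine.oncreate(payload)
  oncreate(payload) {
    // customer business logic for the "record created" event
    console.log('record created for user', payload.user.id)
  },
  // additional hooks (recordUpdated, userLoggedIn, ...) would follow the same pattern
}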

Related

How to retrieve the XPC service of a file provider extension on macOS?

I have extended my example project from my previous question with an attempt to establish an XPC connection.
In a different project we have successfully implemented the file provider for iOS. The exposed service must be resolved by the URLs it is responsible for. On iOS that is the only possibility, and on macOS it appears to be like that, too. Because on macOS the system takes care of managing files, there are no URLs except the one that can be resolved through NSFileProviderItemIdentifier.rootContainer.
In the AppDelegate.didFinishLaunching() method I try to retrieve the service like this (see linked code for full reference, I do not want to unnecessarily bloat this question page for now):
let fileManager = FileManager.default
let fileProviderManager = NSFileProviderManager(for: domain)!
fileProviderManager.getUserVisibleURL(for: NSFileProviderItemIdentifier.rootContainer) { url, error in
    // [...]
    fileManager.getFileProviderServicesForItem(at: url) { list, error in
        // list always contains 0 items!
    }
}
The delivered list is always empty. However, the extension creates a service source on initialization, which creates an NSXPCListener whose NSXPCListenerDelegate exports the NSFileProviderReplicatedExtension object on new connections. What am I missing?
func listener(_ listener: NSXPCListener, shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
    os_log("XPC listener delegate should accept new connection...")
    newConnection.exportedObject = fileProviderExtension
    newConnection.exportedInterface = NSXPCInterface(with: SomeProviderServiceInterface.self)
    newConnection.remoteObjectInterface = NSXPCInterface(with: SomeProductServiceInterface.self)
    newConnection.resume()
    return true
}
Suspicious: the serviceName of the FileProviderServiceSource is never queried. We are out of ideas why this is not working.
There is a protocol which your extension's principal class can implement, NSFileProviderServicing.
https://developer.apple.com/documentation/fileprovider/nsfileproviderservicing

Using `createMockClient` for testing non-React code?

I have a mixed application that uses Apollo for both React and non-React code.
However, I can't find documentation or code examples around testing non-React code with the Apollo client, not using MockedProvider. I did, however, notice that Apollo exports a mock client from the testing directory.
import { createMockClient } from '@apollo/client/testing';
I haven’t found any documentation about this API and am wondering if it’s intended to be used publicly and, if not, what the supported approach is for this.
The reason I need this is simple: when using Next.js' SSR and/or SSG features, data fetching and actual data rendering are split into separate functions.
So the fetching code is not using React, but Node.js, to fetch data.
Therefore I use apolloClient.query to fetch the data I need.
When I try to wrap a React component around that fetching code in a test and wrap MockedProvider around that, the apolloClient's query method always returns undefined for mocked queries, so it seems this only works for the useQuery hook?
Do you have any idea how to mock the client in non-React code?
Thank you for your support in advance. If you need any further information from me feel free to ask.
Regards,
Horstcredible
I was in a similar position where I wanted to use a MockedProvider and mock the client class, rather than use useQuery as documented here: https://www.apollographql.com/docs/react/development-testing/testing/
Though it doesn't seem to be documented, createMockClient from '@apollo/client/testing' can be passed the same mocks as MockedProvider to mock without useQuery. These examples assume you have a MockedProvider:
export const mockGetAssetById = async (id: number): Promise<any> => {
  const client = createMockClient(mocks, GetAsset)
  const { data } = await client.query({
    query: GetAsset,
    variables: { id },
  })
  return data
}
Accomplishes the same as:
const { data } = useQuery(
  GetAsset,
  { variables: { id } }
)
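For completeness, the mocks passed to createMockClient follow the same shape MockedProvider accepts; something along these lines (GetAsset and the returned fields are placeholders for your own query and data):

// sketch of the mocks array (same shape MockedProvider accepts);
// the query, variables and data fields below are placeholders
const mocks = [
  {
    request: {
      query: GetAsset,
      variables: { id: 1 },
    },
    result: {
      data: { asset: { id: 1, name: 'Test asset' } },
    },
  },
];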

Workbox cache group is not correct

I'm using workbox-webpack-plugin v5 (the latest) with the InjectManifest plugin. The following is my service worker source file:
import { CacheableResponsePlugin } from 'workbox-cacheable-response';
import { clientsClaim, setCacheNameDetails, skipWaiting } from 'workbox-core';
import { ExpirationPlugin } from 'workbox-expiration';
import {
  cleanupOutdatedCaches,
  createHandlerBoundToURL,
  precacheAndRoute,
} from 'workbox-precaching';
import { NavigationRoute, registerRoute, setCatchHandler } from 'workbox-routing';
import { CacheFirst, NetworkOnly, StaleWhileRevalidate } from 'workbox-strategies';
setCacheNameDetails({
  precache: 'install-time',
  prefix: 'app-precache',
  runtime: 'run-time',
  suffix: 'v1',
});
cleanupOutdatedCaches();
clientsClaim();
skipWaiting();
precacheAndRoute(self.__WB_MANIFEST);
precacheAndRoute([{ url: '/app-shell.html', revision: 'html-cache-1' }], {
  cleanUrls: false,
});
const handler = createHandlerBoundToURL('/app-shell.html');
const navigationRoute = new NavigationRoute(handler);
registerRoute(navigationRoute);
registerRoute(
  /.*\.css/,
  new CacheFirst({
    cacheName: 'css-cache-v1',
  })
);
registerRoute(
  /^https:\/\/fonts\.(?:googleapis|gstatic)\.com/,
  new CacheFirst({
    cacheName: 'google-fonts-cache-v1',
    plugins: [
      new CacheableResponsePlugin({
        statuses: [0, 200],
      }),
      new ExpirationPlugin({
        maxAgeSeconds: 60 * 60 * 24 * 365,
        maxEntries: 30,
      }),
    ],
  })
);
registerRoute(
  /.*\.js/,
  new StaleWhileRevalidate({
    cacheName: 'js-cache-v1',
  })
);
setCatchHandler(new NetworkOnly());
I have the following questions/problems:
Cache group is not correct. Everything except Google Fonts is under the workbox-precache-v2 or app-precache-install-time-v1 cache group, not individual cache groups such as css-cache-v1 or js-cache-v1. However, 1 in 20 times it shows the correct cache groups, and I just can't figure out why.
Google Fonts shows from memory cache. Is that correct? It works fine offline, but what will happen if the user closes the browser/machine and comes back in offline mode?
Is '/app-shell.html' usage correct? It's an Express backend app with * as the wildcard for all routes, and React Router handles the routing. Functionally, it's working fine offline. I don't have any app-shell.html page.
Thanks for your help.
Cache group is not correct. Everything except Google Fonts is under the workbox-precache-v2 or app-precache-install-time-v1 cache group, not individual cache groups such as css-cache-v1 or js-cache-v1. However, 1 in 20 times it shows the correct cache groups, and I just can't figure out why.
It depends on what's in your precache manifest (i.e. what self.__WB_MANIFEST gets replaced with during your webpack build).
For example, let's say you have a file generated by webpack called bundle.js, and that file ends up in your precache manifest. Based on your service worker code, that file will end up in a cache called app-precache-install-time-v1, even though it also matches your runtime route with the cache js-cache-v1.
The reason is that your precache route is registered before your runtime route, so your precaching logic will handle that request rather than your runtime caching logic.
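If you want those assets handled by your runtime routes instead, one option (a sketch, not the only approach) is to keep them out of the precache manifest in the first place via the exclude option of workbox-webpack-plugin in your webpack config; the swSrc path below is an assumption.

// webpack.config.js (sketch): keep CSS/JS bundles out of self.__WB_MANIFEST
// so the runtime css-cache-v1 / js-cache-v1 routes get to handle them
const { InjectManifest } = require('workbox-webpack-plugin');

module.exports = {
  // ...your existing webpack config...
  plugins: [
    new InjectManifest({
      swSrc: './src/service-worker.js', // path is an assumption
      exclude: [/\.css$/, /\.js$/],
    }),
  ],
};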
Google Fonts shows from memory cache. Is that correct? It works fine offline, but what will happen if the user closes the browser/machine and comes back in offline mode?
I believe this means the request is not being handled by the service worker, but you can check the workbox logs in the developer console to verify (and see why not).
Alternatively, you could update your code to use a custom handler that just logs whether it's running, like so:
registerRoute(
  /^https:\/\/fonts\.(?:googleapis|gstatic)\.com/,
  ({request}) => {
    // Log to make sure this function is being called...
    console.log(request.url);
    return fetch(request);
  }
);
Is '/app-shell.html' usage correct? It's an Express backend app with * as the wildcard for all routes, and React Router handles the routing. Functionally, it's working fine offline. I don't have any app-shell.html page.
Does your Express route respond with a file called app-shell.html? If so, then you'd probably want your precache revision to be a hash of the file itself (rather than revision: 'html-cache-1'). What you have right now should work, but the risk is that you'll change the contents of app-shell.html and deploy a new version of your app, yet your users will still see the old version because you forgot to update revision: 'html-cache-1'. In general it's best to use revisions generated as part of your build step.
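As a rough sketch of that last point, a small build script could compute the revision from the file contents and feed it into the precache entry; the path and the injection mechanism (e.g. webpack's DefinePlugin or a templated service worker) are assumptions here.

// build-revision.js (sketch): derive a content-based revision for app-shell.html
const crypto = require('crypto');
const fs = require('fs');

const revision = crypto
  .createHash('md5')
  .update(fs.readFileSync('./public/app-shell.html')) // path is an assumption
  .digest('hex');

console.log(revision);
// in the service worker, the precache entry would then become:
// precacheAndRoute([{ url: '/app-shell.html', revision }], { cleanUrls: false });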

Accessing aliases in Cypress with "this"

I'm trying to share values between my before and beforeEach hooks using aliases. It currently works if my value is a string, but when the value is an object, the alias is only defined in the first test; in every test after that, this.user is undefined in my beforeEach hook. How can I share a value which is an object between tests?
This is my code:
before(function() {
  const email = `test+${uuidv4()}@example.com`;
  cy
    .register(email)
    .its("body.data.user")
    .as("user");
});
beforeEach(function() {
  console.log("this.user", this.user); // This is undefined in every test except the first
});
The alias is undefined in every test except the first because aliases are cleared down after each test.
Aliased variables are accessed via the cy.get('@user') syntax. Some commands are inherently asynchronous, so using a wrapper to access the variable ensures it is resolved before being used.
See documentation Variables and Aliases and get.
There does not seem to be a way to explicitly preserve an alias, as there is with cookies
Cypress.Cookies.preserveOnce(names...)
but this recipe for preserving fixtures shows a way to preserve global variables by reinstating them in a beforeEach()
let city
let country

before(() => {
  // load fixtures just once, need to store in
  // closure variables because Mocha context is cleared
  // before each test
  cy.fixture('city').then((c) => {
    city = c
  })
  cy.fixture('country').then((c) => {
    country = c
  })
})

beforeEach(() => {
  // we can put data back into the empty Mocha context before each test
  // by the time this callback executes, "before" hook has finished
  cy.wrap(city).as('city')
  cy.wrap(country).as('country')
})
If you want to access a global user value, you might try something like
let user;

before(function() {
  const email = `test+${uuidv4()}@example.com`;
  cy
    .register(email)
    .its("body.data.user")
    .then(result => user = result);
});

beforeEach(function() {
  console.log("global user", user);
  cy.wrap(user).as('user'); // set as alias
});

it('first', () => {
  cy.get('@user').then(val => {
    console.log('first', val) // user alias is valid
  })
})

it('second', () => {
  cy.get('@user').then(val => {
    console.log('second', val) // user alias is valid
  })
})
Replace
console.log("global user", this.user);
with
cy.log(this.user);
and it should work as expected.
The reason for this is the asynchronous nature of Cypress commands. Think of it as a two-step process: the Cypress commands are not doing what you think when they run; they just build up a chain of commands. That chain is then executed as the test later on.
This is obviously not the case for other commands like console.log(). That command is executed while the test is being prepared.
This is explained in great detail in the Cypress documentation, but I found it very hard to get my head around; you have to get used to it.
One rule of thumb: almost every command in your test should be a Cypress command.
So just use cy.log instead of console.log.
If you must use console.log you can do it like this:
cy.visit("/").then(() => console.log(this.user))
This way the console.log is chained. Or, if you do not have a subject to chain off of, build your own custom command like this:
Cypress.Commands.add("console", (message) => console.log(message))
cy.console(this.user)
Another mistake when using this in Cypress is using arrow functions. If you do, you don't have access to the this you are expecting. See Avoiding the use of this in the Cypress docs.
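A minimal sketch of the difference: a test written with function() can read an alias via this (or via cy.get('@user')), while an arrow function cannot.

// sketch: use a regular function so Mocha's test context (and its aliases) is bound to `this`
it('reads the alias', function () {
  cy.wrap({ name: 'Jane' }).as('user');
  cy.get('@user').then(function (user) {
    expect(user.name).to.equal('Jane');
  });
});

// with an arrow function, `this` is the enclosing lexical scope,
// so this.user would not be the aliased value here
it('cannot read the alias via this', () => {
  // this.user is undefined in this style of test
});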
TL;DR: If you want an aliased user object available in each of your tests, you must define it in a beforeEach hook, not a before hook.
Cypress performs a lot of cleanup between tests and this includes clearing all aliases. According to the Sharing Contexts section of Variables and Aliases: "Aliases and properties are automatically cleaned up after each test." The result you are seeing (your alias is cleaned after the first test and subsequently undefined) is thus expected behavior.
I cannot determine what register does in the original post, but it seems your intention is to save the overhead of performing API calls repeatedly in a beforeEach hook. It is definitely easiest to put everything you want in the beforeEach hook and ignore the overhead (also, pure API calls with no UI interaction will not incur much penalty).
If you really need to avoid repetition, this should not be accomplished through regular variables due to potential timing problems with Cypress' custom chainables; that is an anti-pattern they document. The best way to do this would be (a sketch follows the list below):
Create a fixture file with static user data that you will use to conduct the test. (Remove the uuidv4.)
For the set of tests that need your user data, call register in a before hook using the fixture data. This will create the data in the system under test.
Use a beforeEach hook to load the fixture data and alias it for each of your tests. Now, the static data you need is accessible with no API calls and it is guaranteed to be in the system properly thanks to the before hook.
Run your tests using the alias.
Clean up the data in an after hook (since your user no longer has a random email, you need to add this step).
If you need to do the above for the whole test suite, put your before and after hooks in the support file to make them global.
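A rough sketch of that flow; the fixture name, the register command and the cleanup command are placeholders for your own setup.

// sketch of the fixture-based flow; 'user' fixture, cy.register and cy.cleanupUser are placeholders
before(function () {
  cy.fixture('user').then((user) => {
    cy.register(user.email); // create the static user once in the system under test
  });
});

beforeEach(function () {
  cy.fixture('user').as('user'); // re-alias before every test, since aliases are cleared
});

it('uses the aliased user', function () {
  cy.get('@user').then((user) => {
    // assertions against user here
  });
});

after(function () {
  cy.fixture('user').then((user) => {
    cy.cleanupUser(user.email); // placeholder for your own cleanup
  });
});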

mock StripeCheckout in Jasmine test

I am writing Jasmine tests for my Javascript application. A major time-sink has been testing my code which depends on StripeCheckout. Stripe does not want you to use it offline. I realized I should mock the service, but that has not been easy in Jasmine.
How can I mock "custom" (instead of "simple") usage of StripeCheckout?
I tried to use spies, like so,
var StripeCheckout = jasmine.createSpyObj('StripeCheckout', ['configure']);
But I think the created object needs to attach to the global object (window).
So, I can add an object to the global object. This worked, but it feels lame.
Another option could be to tell Karma to load the page over the network. This worked for me, but it seems lame to make a network request for tests.
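For reference, that spy-on-window setup might look something like the sketch below; it assumes the legacy StripeCheckout.configure(...).open() flow and Jasmine's createSpyObj.

// sketch: attach a spy object to window so code under test finds StripeCheckout globally
var checkoutHandler;

beforeEach(function() {
  checkoutHandler = jasmine.createSpyObj('handler', ['open', 'close']);
  window.StripeCheckout = jasmine.createSpyObj('StripeCheckout', ['configure']);
  window.StripeCheckout.configure.and.returnValue(checkoutHandler);
});

it('opens checkout', function() {
  // exercise code that calls StripeCheckout.configure({...}).open({...}), then:
  // expect(window.StripeCheckout.configure).toHaveBeenCalled();
  // expect(checkoutHandler.open).toHaveBeenCalled();
});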
I'm not sure if you have figured this out yet, but an easy way to do this would be to just create a simple mocked implementation for StripeCheckout. Something like this should work:
beforeEach(function() {
  module(function($provide) {
    $provide.factory('StripeCheckout', function() {
      // your mocked custom implementation goes here
      return {
        configure: function() {
          // do something
        }
      };
    });
  });

  inject(function($injector) {
    StripeCheckoutMock = $injector.get('StripeCheckout');
  });
});
This way you are not making a request in order to run your tests. The test will just use the mocked service that you have set up.
