Just started familiarizing myself with Hapi. Hapi uses plugins to add components to your application. I'm having a hard time understanding why I would use plugins when I could just do something like:
var lib = require('whatever lib from npm');
What are the differences between the two?
Hapi plugins are also node modules, but they are node modules built according to the Hapi plugin API (they expose a register method that registers the plugin with your Hapi pack/server).
Plugins can automatically add routes to your server, change the request, payload and response, and in general can change how Hapi behaves.
So in short, plugins are node modules written specifically to augment Hapi.
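For concreteness, here is a rough sketch of what a plugin itself looks like in the pack-era Hapi API used below; the plugin name and the /ping route are made up for illustration, and Hapi also picks up the plugin's name/version metadata (typically from its package.json):

// an illustrative plugin: it adds one route to the server when registered
exports.register = function (plugin, options, next) {
    plugin.route({
        method: 'GET',
        path: '/ping',
        handler: function (request, reply) {
            reply({ pong: true });
        }
    });
    next();
};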
Let's look at two packages, lout and Lo-Dash.
The Lo-Dash module, as you might know, is a high-performance JS toolset.
lout is a Hapi plugin that adds a documentation route to your app.
You can find both on npm. Let's start with lout:
var Hapi = require('hapi'),
    lout = require('lout'),
    server = new Hapi.Server(80);

server.pack.register({
    plugin: lout
}, function () {
    server.start();
});
As you can see, all we need to do is register lout with our server pack and all its magic is available to us (some plugins will require more options).
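For instance, passing options while registering would look roughly like this (the endpoint value here is made up; check each plugin's docs for its actual options):

server.pack.register({
    plugin: lout,
    options: { endpoint: '/docs' }
}, function (err) {
    if (err) { throw err; }
    server.start();
});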
Now let's use Lo-Dash in our code:
var Hapi = require('hapi'),
    lout = require('lout'),
    _ = require('lodash'),
    preset = { app: { name: "myApp" } },
    server;

if (process.env.DEBUG) {
    _.extend(preset, { debug: { request: ['error'] } });
}

server = new Hapi.Server(80, preset);

_.extend(preset, { endpoint: '/lout' });

server.pack.register({
    plugin: lout
}, function () {
    server.start();
});
Here we use Lo-Dash to extend our server settings, configuring the server to log errors to the console if the DEBUG environment variable is set when running the server.
Note that Lo-Dash has no idea about our Hapi server and how it works; it is just used as a helper, and the programmer needs to know how to stitch the two together.
Passing Lo-Dash to server.pack.register has no meaning and would result in an error.
So this won't work:
server.pack.register({
    plugin: require('lodash')
}, function () {
    server.start();
});
Currently getting the following error with MongoDB:
no saslprep library specified. Passwords will not be sanitized
We are using Webpack, so simply installing the module doesn't work (Webpack just ignores it). I found this thread which talks about how to exclude it from Webpack compilations, but then I have to manually load it into every Lambda function, which led me to Lambda Layers.
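(For reference, excluding a package from the bundle is usually done with webpack's externals option; the snippet below is only a sketch of that idea, not the exact configuration from the thread.)

// webpack.config.js (sketch): keep saslprep out of the bundle, so it has to
// be available in node_modules at runtime instead
module.exports = {
    externals: {
        saslprep: "commonjs saslprep"
    }
};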
Following the Serverless guide on using Lambda layers allowed me to get my layer published to AWS and included in all of my functions, but for some reason, it doesn't install the modules. If I download the layer using the AWS GUI, I get a folder with just the package.json and package-lock.json files.
My file structure is:
my-project
|_ layers
   |_ saslprep
      |_ package.json
and my serverless.yml is:
layers:
  saslprep:
    path: layers/saslprep
    compatibleRuntimes:
      - nodejs14.x
This is not my preferred solution, as I'd like to keep using SCRAM-SHA-256, but the way I got around this error/warning was by changing the authMechanism from SCRAM-SHA-256 to SCRAM-SHA-1 in the connection string. The serverless-bundle maintainers most likely need to add this dependency to their package to enable support for MongoDB 4.0's SHA-256 (my best guess!).
You can specify this authentication mechanism by setting the authMechanism parameter to the value SCRAM-SHA-1 in the connection string as shown in the following sample code.
const { MongoClient } = require("mongodb");

// Replace the following with values for your environment.
const username = encodeURIComponent("<username>");
const password = encodeURIComponent("<password>");
const clusterUrl = "<MongoDB cluster url>";
const authMechanism = "SCRAM-SHA-1";

// Replace the following with your MongoDB deployment's connection string.
const uri =
    `mongodb+srv://${username}:${password}@${clusterUrl}/?authMechanism=${authMechanism}`;

// Create a new MongoClient
const client = new MongoClient(uri);

// Function to connect to the server
async function run() {
    try {
        // Connect the client to the server
        await client.connect();
        // Establish and verify connection
        await client.db("admin").command({ ping: 1 });
        console.log("Connected successfully to server");
    } finally {
        // Ensures that the client will close when you finish/error
        await client.close();
    }
}

run().catch(console.dir);
I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file and create the connected functions in the appropriate service files. For example aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
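A minimal sketch of that wiring, using made-up dummy data, could look like this:

// api.service.js (sketch): the alias maps the REST route to the action
module.exports = {
    name: "api",
    mixins: [require("moleculer-web")],
    settings: {
        routes: [{ aliases: { "GET sensors": "sensors.list" } }]
    }
};

// sensors.service.js (sketch): the action the alias points to
module.exports = {
    name: "sensors",
    actions: {
        list() {
            // dummy data returned to the REST caller
            return [{ id: 1, value: 42 }];
        }
    }
};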
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
BUT when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or, can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (the one with the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
Check the example below; I think it has everything that you need.
As a skeleton I've used the moleculer-demo project.
What I have:
An API service, api.service.js, which handles the HTTP requests and passes them on to sensor.service.js.
The sensor.service.js service is responsible for communicating with the remote socket.io server, so it needs a socket.io client. When sensor.service.js has started() I establish a connection with the remote server located at port 8071. After this I can use that connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action (see the sketch just after this list).
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io. All the services are declared in the same way, i.e., module.exports = {}
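A rough sketch of that sensor.service.js, assuming the local program speaks socket.io on port 8071 (the event name and payload here are made up):

// sensor.service.js (sketch)
const io = require("socket.io-client");

module.exports = {
    name: "sensor",
    started() {
        // open the client connection once, when the service starts
        this.socket = io("http://localhost:8071");
    },
    actions: {
        list() {
            // ask the local program for data and return it to the REST caller
            return new Promise((resolve, reject) => {
                this.socket.emit("sensors.get", {}, (err, data) => {
                    err ? reject(err) : resolve(data);
                });
            });
        }
    },
    stopped() {
        // close the connection when the service stops
        if (this.socket) this.socket.disconnect();
    }
};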
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();

broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });

    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });

    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner, just copy the service declarations into my-service.service.js files. So, for example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};
And run npm run dev or moleculer-runner --repl --hot services.
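If you need broker-level settings with the runner, they can go in a moleculer.config.js file that moleculer-runner picks up automatically; the values below are purely illustrative:

// moleculer.config.js (sketch)
module.exports = {
    nodeID: "demo-node",
    logger: true,
    logLevel: "info"
};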
Webpack has a resolve.mainFields configuration: https://webpack.js.org/configuration/resolve/#resolvemainfields
This allows control over which package.json field should be used as the entry point.
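For example, for the default web target the lookup order is roughly the following, and the first field found in a package's package.json wins:

// webpack.config.js
module.exports = {
    resolve: {
        mainFields: ["browser", "module", "main"]
    }
};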
I have an app that pulls in dozens of different 3rd party packages. The use case is that I want to specify what field to use depending on the name of the package. Example:
For package foo use the main field in node_modules/foo/package.json
For package bar use the module field in node_modules/bar/package.json
Certain packages I'm relying on are not bundled in a correct manner; the code that their module field points to does not follow these rules: https://github.com/dherman/defense-of-dot-js/blob/master/proposal.md. This causes the app to break if I wholesale change the webpack configuration to:
resolve: {
  mainFields: ['module']
}
mainFields currently has to be set to main to get the app to work. This causes it to always pull in the CommonJS version of every dependency and misses out on tree shaking. I'm hoping to do something like this:
resolve: {
  foo: {
    mainFields: ['main']
  },
  bar: {
    mainFields: ['module']
  }
}
Package foo gets bundled into my app via its main field and package bar gets bundled in via its module field. That way I get the benefits of tree shaking with the bar package, and I don't break the app with the foo package (which has a module field that doesn't point to proper module syntax).
One way to achieve this would be, instead of using resolve.mainFields, to make use of the resolve.plugins option and write your own custom resolver (see https://stackoverflow.com/a/29859165/6455628), because with a custom resolver you can programmatically resolve a different path for different modules.
I am copy-pasting Ricardo Stuven's answer here:
Yes, it's possible. To avoid ambiguity and for easier implementation, we'll use a prefix hash symbol as a marker of your convention:
require("#./components/SettingsPanel");
Then add this to your configuration file (of course, you can refactor it later):
var webpack = require('webpack');
var path = require('path');

var MyConventionResolver = {
    apply: function (resolver) {
        resolver.plugin('module', function (request, callback) {
            if (request.request[0] === '#') {
                var req = request.request.substr(1);
                var obj = {
                    path: request.path,
                    request: req + '/' + path.basename(req) + '.js',
                    query: request.query,
                    directory: request.directory
                };
                this.doResolve(['file'], obj, callback);
            }
            else {
                callback();
            }
        });
    }
};

module.exports = {
    resolve: {
        plugins: [
            MyConventionResolver
        ]
    }
    // ...
};
resolve.mainFields did not work in my case, but resolve.aliasFields did.
More details in https://stackoverflow.com/a/71555568/7534433
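For completeness, a minimal sketch of the aliasFields shape ("browser" is the common field here; adjust to your packages):

// webpack.config.js (sketch)
module.exports = {
    resolve: {
        aliasFields: ["browser"]
    }
};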
spring boot version: 2.0.4.RELEASE
asset-pipeline version: 3.0.3
Hi
We're using this plugin because we know it from our Grails applications.
We liked it because it has a simple configuration (for our requirements).
Now we're developing a Spring Boot application, we used this plugin there too, and we're (almost) happy with it.
But when we run the application in development mode, the assets don't have a digest like /assets/my-styles-b5d2d7380a49af2d7ca7943a9aa74f62s.css.
How do I configure the plugin to create a digest for all our resources?
Currently we're using this configuration:
assets {
    minifyJs = true
    minifyCss = true
    enableSourceMaps = false
    includes = ["application.js", "application.scss"]
}
And we're using Thymeleaf for our templates:
<link th:href="@{/assets/application.css}" rel="stylesheet">
I found a solution...
When you use the asset-pipeline, you get a Gradle task called assetCompile.
When creating a .war file, you can add this Gradle task and replace all the assets with the versioned files.
When you want to use the versioned files in production mode, you have to use this configuration (build.gradle):
assets {
    minifyJs = true
    minifyCss = true
    skipNonDigests = true
    packagePlugin = true
    includes = ["application.js", "application.scss"]
}

...

war {
    dependsOn 'assetCompile'
    from("${buildDir}/assets", {
        into "/WEB-INF/classes/META-INF/assets"
    })
    baseName = '<your project>'
    enabled = true
}
That's all.
When running the assetCompile task, a manifest.properties file is created. This file contains the mapping of the original filename and the versioned one.
This file is used by the application to find the correct resource, e.g. application.css=application-79a3c8a2f085ecefadgfca3cda6fe3d12.css
I created a plugin which enables URL replacement for assets with a digest in production mode:
Dependency
compile 'ch.itds.taglib:asset-pipeline-thymeleaf-taglib:1.0.0'
Configuration
@Configuration
public class ThymeleafConfig {
    @Bean
    public AssetDialect assetDialect() {
        return new AssetDialect();
    }
}
Usage
<html xmlns:asset="https://www.itds.ch/taglib/asset">
    <script asset:src="@{/assets/main.js}"></script>
</html>
The asset:src="@{/assets/main.js}" will be replaced with src="/assets/main-DIGEST.js".
The replacement happens only if the developmentRuntime of the asset pipeline is disabled.
A few more details are available in my blog post: https://kobelnet.ch/Blog/2019/03/12/assetpipelinethymeleaftaglib
I am in the process of writing unit/behavioural tests using Mocha for a particular blockchain network use case. Based on what I can see, these tests are not hitting the actual Fabric; in other words, they seem to be running in some kind of simulated environment. I don't get to see any of the transactions that took place as part of the tests. Can someone please tell me if it is somehow possible to capture the transactions that take place as part of the Mocha tests?
Initial portion of my code below:
describe('A Network', () => {
    // In-memory card store for testing so cards are not persisted to the file system
    const cardStore = require('composer-common').NetworkCardStoreManager.getCardStore({ type: 'composer-wallet-inmemory' });
    let adminConnection;
    let businessNetworkConnection;
    let businessNetworkDefinition;
    let businessNetworkName;
    let factory;
    //let clock;

    // Embedded connection used for local testing
    const connectionProfile = {
        name: 'hlfv1',
        'x-type': 'hlfv1',
        'version': '1.0.0'
    };

    before(async () => {
        // Generate certificates for use with the embedded connection
        const credentials = CertificateUtil.generate({ commonName: 'admin' });

        // PeerAdmin identity used with the admin connection to deploy business networks
        const deployerMetadata = {
            version: 1,
            userName: 'PeerAdmin',
            roles: ['PeerAdmin', 'ChannelAdmin']
        };
        const deployerCard = new IdCard(deployerMetadata, connectionProfile);
        console.log("line 63")
        const deployerCardName = 'PeerAdmin';
        deployerCard.setCredentials(credentials);
        console.log("line 65")

        // setup admin connection
        adminConnection = new AdminConnection({ cardStore: cardStore });
        console.log("line 69")
        await adminConnection.importCard(deployerCardName, deployerCard);
        console.log("line 70")
        await adminConnection.connect(deployerCardName);
        console.log("line 71")
    });
Earlier, my connection profile was using the embedded mode, which I changed to hlfv1 after looking at the answer below. Now I am getting the error: Error: the string "Failed to import identity. Error: Client.createUser parameter 'opts mspid' is required." was thrown, throw an Error :). This is coming from
await adminConnection.importCard(deployerCardName, deployerCard);. Can someone please tell me what needs to be changed? Any documentation/resources would be helpful.
Yes, you can use a real Fabric, which means you could interact with the created transactions using your test framework, or indeed by other means such as REST or Playground.
In Composer's own test setup, the option for testing against an hlfv1 Fabric environment is used in its setup (i.e. whether you want to use embedded, web or a real Fabric) -> see https://github.com/hyperledger/composer/blob/master/packages/composer-tests-functional/systest/historian.js#L120
Setup is captured here
https://github.com/hyperledger/composer/blob/master/packages/composer-tests-functional/systest/testutil.js#L192
An example of setting up the artifacts you would need in order to use a real Fabric is here:
https://github.com/hyperledger/composer/blob/master/packages/composer-tests-functional/systest/testutil.js#L247
Also see this blog for more guidelines -> https://medium.com/@mrsimonstone/debug-your-blockchain-business-network-using-hyperledger-composer-9bea20b49a74