I'm new to Cycle.js and I'm looking for WebSocket support, but I don't see any (apart from the read-only WebSocket driver from the docs and a 0.1.2 Node-side npm package).
Am I supposed to create my own driver or am I missing something?
Thanks in advance
Does this page help you?
https://cycle.js.org/drivers.html
Specifically, the example code mentioned there:
import xs from 'xstream';

function WSDriver(/* no sinks */) {
    // Keep the connection in scope so stop() can close it
    // (the docs version mixes this.connection and connection).
    let connection;
    return xs.create({
        start: listener => {
            connection = new WebSocket('ws://localhost:4000');
            connection.onerror = err => listener.error(err);
            connection.onmessage = msg => listener.next(msg);
        },
        stop: () => {
            connection.close();
        },
    });
}
If you add a sink, this becomes a read/write driver. From their documentation:
Most drivers, like the DOM Driver, take sinks (to describe a write) and return sources (to catch reads). However, we might have valid cases for write-only drivers and read-only drivers.
For instance, the one-liner log driver we just saw above is a write-only driver. Notice how it is a function that does not return any stream, it simply consumes the sink msg$ it receives.
Other drivers only create source streams that emit events to the main(), but don’t take in any sink from main(). An example of such would be a read-only Web Socket driver, drafted below (that is the snippet quoted above).
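If you do want writes as well, a read-and-write WebSocket driver could look roughly like the sketch below. This is not an official Cycle.js driver: the outgoing$ sink, the factory shape, and the buffering caveat are my assumptions.

import xs from 'xstream';

// Sketch of a read/write WebSocket driver: it consumes a sink stream of
// outgoing messages and returns a source stream of incoming ones.
function makeWSDriver(url) {
    return function WSDriver(outgoing$) {
        const connection = new WebSocket(url);

        // Write side: send every message emitted by main().
        // (A production driver would buffer sends until 'open' fires.)
        outgoing$.addListener({
            next: msg => connection.send(msg),
            error: () => {},
            complete: () => {},
        });

        // Read side: expose incoming messages as a stream.
        return xs.create({
            start: listener => {
                connection.onerror = err => listener.error(err);
                connection.onmessage = msg => listener.next(msg);
            },
            stop: () => connection.close(),
        });
    };
}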
I am having fun with using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file, and create the connected functions in the appropriate service files. For example aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js . It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
But when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (the one that has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The sketch and working example below should have everything you need.
As a skeleton I've used the moleculer-demo project.
What I have:
An API service, api.service.js, that handles the HTTP requests and passes them on to sensor.service.js.
sensor.service.js, which is responsible for communicating with the remote socket.io server, so it needs a socket.io client. In its started() hook it establishes a connection to the remote server listening on port 8071; after that, its actions can use this connection to talk to the socket.io server, which is exactly what the sensor.list action does (see the sketch after this list).
remote-server.service.js, which I created to mock your socket.io server. Despite being a moleculer service itself, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io: all services are declared the same way, i.e. module.exports = {}.
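Here is a minimal sketch of what sensor.service.js could look like. The port comes from the question; the "sensor-list" event name and the shape of the remote API are illustrative assumptions.

// sensor.service.js -- minimal sketch
const io = require("socket.io-client");

module.exports = {
    name: "sensor",
    actions: {
        // localhost:3000/sensor/list ends up here via the API gateway alias
        list() {
            return new Promise((resolve, reject) => {
                // "sensor-list" is an assumed event name; use whatever your
                // local program actually listens for.
                this.socket.emit("sensor-list", (err, data) => {
                    if (err) return reject(err);
                    resolve(data);
                });
            });
        }
    },
    started() {
        // Open the client connection once, when the service starts.
        this.socket = io("http://localhost:8071");
    },
    stopped() {
        this.socket.close();
    }
};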
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach the "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });

    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });

    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner, just copy each service declaration into its own *.service.js file. For example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach the "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};
Then run npm run dev or moleculer-runner --repl --hot services.
I am in the process of writing unit/behavioural tests using Mocha for a particular blockchain network use-case. Based on what I can see, these tests are not hitting the actual Fabric; in other words, they seem to be running in some kind of simulated environment. I don't get to see any of the transactions that took place as part of the tests. Can someone please tell me if it is somehow possible to capture the transactions that take place as part of the Mocha tests?
Initial portion of my code below:
// Assumed requires for the snippet below (the post starts mid-file):
const { AdminConnection } = require('composer-admin');
const { NetworkCardStoreManager, CertificateUtil, IdCard } = require('composer-common');

describe('A Network', () => {
    // In-memory card store for testing so cards are not persisted to the file system
    const cardStore = NetworkCardStoreManager.getCardStore({ type: 'composer-wallet-inmemory' });
    let adminConnection;
    let businessNetworkConnection;
    let businessNetworkDefinition;
    let businessNetworkName;
    let factory;
    //let clock;

    // Embedded connection used for local testing
    const connectionProfile = {
        name: 'hlfv1',
        'x-type': 'hlfv1',
        'version': '1.0.0'
    };

    before(async () => {
        // Generate certificates for use with the embedded connection
        const credentials = CertificateUtil.generate({ commonName: 'admin' });

        // PeerAdmin identity used with the admin connection to deploy business networks
        const deployerMetadata = {
            version: 1,
            userName: 'PeerAdmin',
            roles: ['PeerAdmin', 'ChannelAdmin']
        };
        const deployerCard = new IdCard(deployerMetadata, connectionProfile);
        const deployerCardName = 'PeerAdmin';
        deployerCard.setCredentials(credentials);

        // setup admin connection
        adminConnection = new AdminConnection({ cardStore: cardStore });
        await adminConnection.importCard(deployerCardName, deployerCard);
        await adminConnection.connect(deployerCardName);
    });
Earlier, my connection profile was using the embedded mode, which I changed to hlfv1 after looking at the answer below. Now I am getting the error: Error: the string "Failed to import identity. Error: Client.createUser parameter 'opts mspid' is required." was thrown, throw an Error :). It comes from await adminConnection.importCard(deployerCardName, deployerCard);. Can someone please tell me what needs to be changed? Any documentation/resource will be helpful.
Yes, you can use a real Fabric, which means you could then interact with the created transactions using your test framework, or indeed by other means such as REST or Playground.
Composer's own functional tests support testing against an hlfv1 Fabric environment; the setup lets you choose whether to use the embedded, web or real Fabric runtime -> see https://github.com/hyperledger/composer/blob/master/packages/composer-tests-functional/systest/historian.js#L120
Setup is captured here
https://github.com/hyperledger/composer/blob/master/packages/composer-tests-functional/systest/testutil.js#L192
An example of setting up the artifacts that you would need in order to use a real Fabric is here:
https://github.com/hyperledger/composer/blob/master/packages/composer-tests-functional/systest/testutil.js#L247
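For orientation, the "opts mspid" error usually means the connection profile doesn't describe the organization the identity belongs to. A real hlfv1 profile has to carry the whole topology, roughly like the sketch below; every hostname, port and name here is a placeholder, not something from the post:

// Rough sketch of an hlfv1 (common connection profile) shape -- placeholders only
const connectionProfile = {
    name: 'hlfv1',
    'x-type': 'hlfv1',
    version: '1.0.0',
    client: { organization: 'Org1' },
    channels: {
        composerchannel: {
            orderers: ['orderer.example.com'],
            peers: { 'peer0.org1.example.com': {} }
        }
    },
    organizations: {
        Org1: {
            mspid: 'Org1MSP', // the piece the importCard error complains about
            peers: ['peer0.org1.example.com']
        }
    },
    orderers: { 'orderer.example.com': { url: 'grpc://localhost:7050' } },
    peers: { 'peer0.org1.example.com': { url: 'grpc://localhost:7051' } }
};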
Also see this blog for more guidelines -> https://medium.com/@mrsimonstone/debug-your-blockchain-business-network-using-hyperledger-composer-9bea20b49a74
I need to write a TFTP client implementation to send a file from a Windows Phone 8.1 device to a piece of hardware.
Because I need to support Windows Phone 8.1, I have to use the Windows.Networking.Sockets classes.
I'm able to send my write request packet, but I'm having trouble receiving the ACK packet (observed in Wireshark). This ACK packet is sent to an "ephemeral port" according to the TFTP specification, but the port is blocked according to Wireshark.
I know how to use sockets on a specific port, but I don't know how to receive ACK packets sent to different (ephemeral) ports. I need to use the port of that ACK packet to continue the TFTP communication.
How would I be able to receive the ACK packets and continue the transfer on a different port? Do I need to bind the socket to multiple ports? I've been trying to find answers in the Microsoft docs and on Google, but other implementations gave me no luck so far.
For reference, my current implementation:
try {
    hostName = new Windows.Networking.HostName(currentIP);
} catch (error) {
    WinJS.log && WinJS.log("Error: Invalid host name.", "sample", "error");
    return;
}

socketsSample.clientSocket = new Windows.Networking.Sockets.DatagramSocket();
socketsSample.clientSocket.addEventListener("messagereceived", onMessageReceived);
socketsSample.clientSocket.bindEndpointAsync(new Windows.Networking.HostName(hostName), currentPort);

WinJS.log && WinJS.log("Client: connection started.", "sample", "status");

socketsSample.clientSocket.connectAsync(hostName, serviceName).done(function () {
    WinJS.log && WinJS.log("Client: connection completed.", "sample", "status");
    socketsSample.connected = true;

    var remoteFile = "test.txt";
    var tftpMode = Modes.Octet;
    var sndBuffer = createRequestPacket(Opcodes.Write, remoteFile, tftpMode);

    if (!socketsSample.clientDataWriter) {
        socketsSample.clientDataWriter =
            new Windows.Storage.Streams.DataWriter(socketsSample.clientSocket.outputStream);
    }

    var writer = socketsSample.clientDataWriter;
    var reader;
    var stream;
    writer.writeBytes(sndBuffer);

    // The call to storeAsync sends the actual contents of the writer
    // to the backing stream.
    writer.storeAsync().then(function () {
        // For the in-memory stream implementation we are using, the flushAsync call
        // is superfluous, but other types of streams may require it.
        return writer.flushAsync();
    });
}, onError);
Finally found the issue.
Instead of connectAsync I used getOutputStreamAsync, and now the client socket receives the messages. The reason: connectAsync ties the DatagramSocket to one fixed remote endpoint, so replies from the server's ephemeral port are dropped, while an unconnected socket bound only locally accepts datagrams from any remote port, which is exactly what TFTP's port switch requires.
Some code:
tftpSocket.clientSocket.getOutputStreamAsync(new Windows.Networking.HostName(self.hostName), tftpSocket.serviceNameConnect).then(function (stream) {
console.log("Client: connection completed.", "sample", "status");
var writer = new Windows.Storage.Streams.DataWriter(stream); //use the stream that was created when calling getOutputStreamAsync
tftpSocket.clientDataWriter = writer; //keep the writer in case we need to close sockets we also close the writer
writer.writeBytes(sndBytes);
// The call to store async sends the actual contents of the writer
// to the backing stream.
writer.storeAsync().then(function () {
// For the in-memory stream implementation we are using, the flushAsync call
// is superfluous, but other types of streams may require it.
return writer.flushAsync();
});
}, self.onError);
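To pick up the ephemeral port the server answered from, the messagereceived handler can read it off the event arguments. A sketch; the opcode parsing and variable names are illustrative, not from the original post:

function onMessageReceived(eventArguments) {
    // TFTP requires every further packet of this transfer to go to the
    // port the server answered from, so remember it for the next
    // getOutputStreamAsync call.
    tftpSocket.serviceNameConnect = eventArguments.remotePort;

    var reader = eventArguments.getDataReader();
    var opcode = reader.readUInt16();      // 4 = ACK, 5 = ERROR
    var blockNumber = reader.readUInt16(); // which DATA block was acknowledged
    console.log("ACK for block " + blockNumber + " from port " + eventArguments.remotePort);
}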
My Parse app has a GiftCode collection which disallows the find operation at the class level.
I am writing a beforeSave cloud function that prevents duplicate codes from being entered by our team from Parse's dashboard:
Parse.Cloud.beforeSave('GiftCode', function (req, res) {
    Parse.Cloud.useMasterKey();

    const code = req.object.get('code');
    if (!code) {
        res.success();
    } else {
        const finalCode = code.toUpperCase().trim();
        req.object.set('code', finalCode);

        (new Parse.Query('GiftCode'))
            .equalTo('code', finalCode)
            .first()
            .then((gift) => {
                if (!gift) {
                    res.success();
                } else {
                    res.error(`GiftCode with code=${finalCode} already exists (objectId=${gift.id})`);
                }
            }, (err) => {
                console.error(err);
                res.error(err);
            });
    }
});
As you can see, I am calling Parse.Cloud.useMasterKey() (and this is running in the Parse cloud), but I am still getting the following error:
This user is not allowed to perform the find operation on GiftCode.
I use useMasterKey() in other normal cloud functions and am able to perform find operations as needed.
Is useMasterKey() not applicable to beforeSave functions?
I've never tried to use the master key in a beforeSave function, but I wouldn't be surprised if there are some extra safeguards in place to prevent it. From a security standpoint, it seems like it could make all write-based CLPs and ACLs worthless for that class.
Try selectively using the master key by passing it as an option to the query, like so:
(new Parse.Query('GiftCode'))
    .equalTo('code', finalCode)
    .first({ useMasterKey: true })
    .then((gift) => {
    ...
Parse.Cloud.useMasterKey(); has been deprecated since Parse Server version 2.3.0 (Dec 7, 2016). From that version on it is a no-op (it does nothing). You should instead pass the { useMasterKey: true } option to each method that needs to override the ACL or CLP in your code.
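Putting both answers together, the asker's beforeSave would become something like this sketch; the only changes from the original are dropping the no-op useMasterKey() call and passing the option to first():

Parse.Cloud.beforeSave('GiftCode', function (req, res) {
    const code = req.object.get('code');
    if (!code) {
        res.success();
    } else {
        const finalCode = code.toUpperCase().trim();
        req.object.set('code', finalCode);

        (new Parse.Query('GiftCode'))
            .equalTo('code', finalCode)
            .first({ useMasterKey: true }) // override the class-level find permission
            .then((gift) => {
                if (!gift) {
                    res.success();
                } else {
                    res.error(`GiftCode with code=${finalCode} already exists (objectId=${gift.id})`);
                }
            }, (err) => {
                console.error(err);
                res.error(err);
            });
    }
});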
const fs = require('fs');
const request = require('request');

request.post({ url: 'http://service.com/upload', form: { data: fs.createReadStream('T:/a.png') } });
After running this every second for ~10 minutes, I get this error: Error: EMFILE (too many open files)
Why? Am I supposed to be manually closing these streams? Why wouldn't request do that, and if I am, how do I close them?
EDIT: I'm not making a lot of async calls; only 1 request is active at a time.
EDIT: No streams are ever being closed; it errors out exactly after I create 2046 streams. Why aren't they being closed?
It depends on what kind of service you are building.
If, for example, you execute that fs.createReadStream many times asynchronously (like thousands of times in parallel), Node.js will throw that error.
Try to debug your application like this (counting the number of open streams):
var fs = require("fs");

var streams = 0;
var i;
try {
    for (i = 0; i < 10000; i++) {
        streams++;
        // Note: 'end' only fires once the stream has been read to completion,
        // so a stream that is never consumed never decrements the counter --
        // and never releases its file descriptor.
        fs.createReadStream('./file.png').once('end', function () { streams--; });
    }
} catch (e) {
    console.log(streams);
}
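For the question's case (one upload per second), one possible fix is to make sure each request has finished, so its stream has been fully consumed, before starting the next one, and to destroy the stream if the request fails. A sketch using the question's URL and path; the loop interval is an assumption:

const fs = require('fs');
const request = require('request');

function upload(done) {
    const data = fs.createReadStream('T:/a.png');
    request.post({ url: 'http://service.com/upload', form: { data: data } },
        function (err, res, body) {
            // If the upload failed, the stream may never be fully read,
            // so destroy it explicitly to release the file descriptor.
            if (err) data.destroy();
            done(err, res);
        });
}

// Start the next upload only after the previous one has completed.
function loop() {
    upload(function () {
        setTimeout(loop, 1000);
    });
}
loop();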