Metamask is not able to execute normal transactions in private quorum blockchain - metamask

I am setting up a Quorum network with two nodes from scratch, using the Raft consensus mechanism. Both nodes are up and running; one is designated as the minter and the other as a verifier. I have deployed a smart contract to the private Quorum network using the Truffle framework.
I was able to add the contract token and account info to Metamask and retrieve corresponding ETH balance and token balances.
However, when I try to send plain Ether between two accounts using MetaMask (with the accounts connected to the Quorum chain), the transaction fails with the following message:
MetaMask - RPC Error: Error: [ethjs-rpc] rpc error with payload {"id":6378119053557,"jsonrpc":"2.0","params":["0xf88b17850af16b1600830f424094e8bd1fc300c3cd85bf033a13effe02226d22e76280a4c6ed8990000000000000000000000000000000000000000000000000000000000000000a82f4f5a04e31045bf76c16a594ee742be05909019bfe31b0c57cca66918b13898b3684fda023aa10926436f4eec79d2a08407a55ba35d1dad4d50d9029dc26d7d7629bfabf"],"method":"eth_sendRawTransaction"} [object Object]
I have been trying to debug this error for a while with no success whatsoever. Requesting experts for help on this!
Thanks!

This happens to me sometimes. I have tried a lot of things, but the easiest fix is:
Go to: Settings > Advanced > Reset Account.
Note: resetting your account will clear your transaction history. It will not change the balances in your accounts or require you to re-enter your seed phrase.

How does MetaMask confirm transactions?

We are trying to add the eth json rpc methods to our custom blockchain so we can use Metamask.
We are able to import accounts and send transactions to the blockchain but can not seem to get them to confirm.
getTransactionReceipt is being sent to our blockchain and we are returning the required response, but the transaction always stays "pending" in MetaMask.
Is there any documentation on this flow? Can anyone explain what we are doing wrong? How can we get the transactions to show as complete?
MetaMask does not confirm any transaction. It is just a bridge that connects you to the blockchain, typically through the infrastructure available at Infura. More here: Is there some relation between window.ethereum injected by metamask and web3.js? Can we use both?
MetaMask connects you to the blockchain. A blockchain consists of blocks of data, and these blocks are stored on nodes; without nodes, you cannot access blockchain data.
If you have ever worked with Ganache in a development environment: to connect MetaMask to Ganache, you enter the details of the local Ganache blockchain, and MetaMask connects you to it, but in the end it is the Ganache blockchain that deploys and confirms the transactions.
Most likely either something is wrong with your contract code or you are passing the wrong parameters from the front end.
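Since a wallet keeps a transaction "pending" until it sees a complete receipt, it is worth checking the shape of the receipt your node returns. Below is a sketch (Python; the hex values are placeholders) of a response with the fields defined by the Ethereum JSON-RPC spec for eth_getTransactionReceipt. A missing or malformed blockHash, blockNumber, or status is a common culprit:

```python
# A minimal, well-formed eth_getTransactionReceipt response. All quantities
# are 0x-prefixed hex strings; the hash and address values below are
# placeholders, not real data.
receipt_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "transactionHash": "0x" + "ab" * 32,   # placeholder hash
        "transactionIndex": "0x0",
        "blockHash": "0x" + "cd" * 32,         # placeholder hash
        "blockNumber": "0x10",
        "from": "0x" + "11" * 20,
        "to": "0x" + "22" * 20,
        "cumulativeGasUsed": "0x5208",
        "gasUsed": "0x5208",
        "contractAddress": None,               # null unless contract creation
        "logs": [],
        "logsBloom": "0x" + "00" * 256,
        "status": "0x1",                       # "0x1" = success, "0x0" = failure
    },
}

def looks_complete(resp):
    """Check the fields a wallet needs before it stops showing 'pending'."""
    r = resp.get("result") or {}
    required = ("transactionHash", "blockHash", "blockNumber",
                "gasUsed", "cumulativeGasUsed", "status")
    return all(r.get(k) is not None for k in required)

print(looks_complete(receipt_response))  # prints True
```

It may also be worth checking that your node's eth_blockNumber advances to at least the receipt's blockNumber, since wallets typically track chain height as well.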

Persist blocked users in one microservice's DB and verify them in other microservices

Description of our project
We follow a microservices architecture with a database per service. We want to introduce a blacklist function: if a user is blacklisted, they cannot use any of our microservices. We have multiple entry/exit points into our microservices, such as a gateway service (used by the frontend team), WebSocket message receivers, and multiple Spring schedulers that process user data.
Current solution
We persist the blacklisted users in a database and expose them via an endpoint; call this the access service. The support team populates the blacklist by calling the access service's create endpoint. Whenever we receive a request from the frontend, the gateway calls the access service to check whether the current user is in the blacklist; if so, we block further access. The same applies to every message received from the schedulers or WebSocket notifications, i.e. on each call we check whether the user is blacklisted.
Problem statement
We have two WebSocket notification receivers and multiple schedulers that run every 5 minutes, which in turn all want to hit the same blacklist access service. As a result we make too many calls to the access service, and it becomes a bottleneck.
How do we avoid this case?
There are several approaches to the blocklisting problem.
First, you could have one service with a blocklist, and for every incoming request to every service you would make an extra call to the blocklist service. Clearly, this is a huge availability and scalability risk.
The second option is push-based: the blocklist service notifies all other services about blocklisted users, so every service can make a local decision about whether to process a request.
The third option is to bake expiration into user sessions. Every session has three elements: an expiration time, an access token, and a refresh token. Until the expiration time, every service will accept requests carrying a valid access token. Once an access token expires, the client has to get a new one by contacting a token service. That service reads the refresh token, checks whether the user is still active, and if so issues a new access token.
The third option is the one widely used. Most (all?) cloud providers have short-lived credentials for this specific goal: to make sure access can be revoked after some time.
Short-lived credentials vs. a dedicated service is a well-known trade-off; you can read more about a very similar problem here: https://en.wikipedia.org/wiki/Certificate_revocation_list
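The token approach can be sketched with the Python standard library. All names, the secret, and the five-minute TTL below are illustrative assumptions, not from the answer; a real system would use a proper JWT library and managed keys:

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"     # illustrative only; use a managed, rotated key
ACCESS_TOKEN_TTL = 300           # short lifetime bounds how long revocation lags

def issue_access_token(user_id, now=None):
    """Token service: sign a short-lived access token (minimal JWT-like shape)."""
    now = time.time() if now is None else now
    payload = json.dumps({"sub": user_id, "exp": now + ACCESS_TOKEN_TTL}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_access_token(token, now=None):
    """Any service: verify locally, with no call to a central blocklist."""
    now = time.time() if now is None else now
    try:
        payload_b64, sig = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(payload)
    if claims["exp"] < now:
        return None              # expired: client must go back to the token service
    return claims["sub"]

BLOCKLIST = {"mallory"}          # consulted only on the refresh path

def refresh(user_id):
    """Token service: re-check the user before issuing a fresh access token."""
    if user_id in BLOCKLIST:
        return None              # revoked: existing tokens die out within the TTL
    return issue_access_token(user_id)
```

The point of the design is that `verify_access_token` runs in every service with no network call; only `refresh` touches the blocklist, so a revoked user is locked out within one TTL.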

Can we replicate the HTTP session idea in an MQTT architecture?

Roughly speaking, an HTTP session is a kind of secret that the server sends to the client (e.g. a browser) after the user's credentials are checked. This secret is passed through all subsequent HTTP requests and identifies the user. This matters because HTTP is stateless (source).
Now I have a scenario with communication between a machine and an MQTT broker inside AWS IoT Core. The machine displays some screens; the first screen asks for a login and password.
The idea is that after the first screen, if the credentials are valid, the server should generate a "session" and send it across the screen pages. The machine should then include this session in all subsequent messages, and the server must validate the string before taking any action. This is a request from the electrical engineering team.
From the software development side, this seems to make little sense, since every machine connecting to the AWS IoT Core (MQTT) broker must use a certificate, which is already a form of validation.
Besides that, the MQTT broker offers session persistence capabilities. I know that sessions (QoS 0/1) on the broker side relate to delivery and reception guarantees for messages.
That being said, is it possible to use MQTT session persistence to behave like an HTTP session, i.e. to identify users across screens on devices? If yes, how?
No, the HTTP session concept is not in any way similar to the MQTT session. The only thing held in an MQTT client's session is the list of subscribed topics, whereas an HTTP session can hold arbitrary data.
Also, an MQTT message holds no information about the user, or even the client that published it, when it is delivered to a subscriber; the only information present is the message payload and the topic it was published to.
While MQTT v5 adds the option to include more metadata, trying to bolt the concept of user sessions onto it is like trying to fit a square peg into a round hole.
If you want to implement something like this as part of the message payload, that is entirely up to you, but it has nothing to do with the transport protocol.
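If you do carry a session in the payload, a minimal sketch could look like the following (Python; all names, the HMAC scheme, and the message shape are illustrative assumptions, since nothing here is specified by MQTT):

```python
import hashlib, hmac, json, secrets

# The "session" lives entirely in the application payload; the MQTT broker
# never inspects it and only sees a topic and opaque bytes.
SESSIONS = {}                    # session_id -> user, kept server-side
SECRET = b"demo-hmac-key"        # illustrative only

def login(user, password):
    """After validating credentials (elided), hand the device an opaque id."""
    session_id = secrets.token_hex(16)
    SESSIONS[session_id] = user
    return session_id

def build_message(session_id, screen, data):
    """Device side: every publish embeds the session id in the payload."""
    body = {"session": session_id, "screen": screen, "data": data}
    payload = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"topic": f"machines/{screen}", "payload": payload, "mac": mac}

def handle_message(msg):
    """Server side: check integrity and the session before taking any action."""
    expected = hmac.new(SECRET, msg["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(msg["mac"], expected):
        return None
    body = json.loads(msg["payload"])
    return SESSIONS.get(body["session"])  # None = unknown session: reject
```

The messages here are plain dicts standing in for MQTT publishes; with a real client library the same payload bytes would simply be published to the topic.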

How to integrate internal APIs (not accessible outside the office network) with Slack slash commands

I am trying to add a slash command to one of my Slack channels. I tried a POC using the git API and it worked fine.
I first created a slash command from this link:
https://api.slack.com/censored/slash-commands
Command: /poc
Request URL: http://jsonplaceholder.typicode.com/posts
This worked fine when I typed /poc in the chat box of my channel; it returned some data.
But when I change the Request URL to an internal API that is accessible only from the office network, I get this error:
Darn – that slash command didn't work (error message: Failure when
receiving data from the peer). Manage the command at .
I believe Slack is not able to access my internal URL in this case. Is there a way to see the Slack logs?
Can anyone please help me here?
This cannot work, since the request URL needs to be accessible from the public Internet in order to work with Slack.
In general, most of Slack's interactive features (slash commands, interactive messages, modals, the Events API, ...) require your app to provide a public endpoint that Slack can call via HTTP.
To access internal APIs from Slack, you will need some kind of gateway or tunnel through your company's firewall that exposes the request URL to Slack. There are many ways to do that, and the solution needs to be designed according to your company's security policy.
Here are a couple of suggestions:
VPN tunnel
One approach is to run the script for your slash command on an internal web server (one that has access to the internal API) and use a VPN tunnel to expose that web server to the Internet, e.g. with a tool like ngrok.
DMZ
Another approach is to run your app in the DMZ of your company's network and configure the firewall on both sides to allow Slack to reach your app from the public Internet and your app to reach your internal network.
Bridge
Another approach is to host the part of your app that interacts with Slack on the public Internet, and the part that interacts with your internal network on the internal company network, then add a secure connection that allows the public part to communicate with the internal part.
If opening a connection into the internal network is not an option, there is another way that can allow communication with internal services by inverting the communication direction with a queue.
To do this, you deploy a public endpoint that accepts the requests from Slack and puts them onto a queue (e.g. AWS Lambda + SQS, or Flask + RabbitMQ), and then poll the queue from the internal network. The polling needs to happen fairly often (at least once a second) so that users do not notice the lag too much. This way you avoid exposing any endpoint of the internal network.
The drawbacks of this approach are more infrastructure complexity and slower response times, but it can be a good alternative in some corporate environments.
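As a rough in-process illustration of the queue inversion (Python stdlib stand-ins: `inbound` plays the role of the hosted queue, `internal_poller` the worker inside the office network; all names are made up):

```python
import queue, threading, time

inbound = queue.Queue()   # stand-in for the public queue (e.g. SQS/RabbitMQ)
replies = {}              # stand-in for responses posted back to Slack

def public_endpoint(slack_request):
    """Runs on the public Internet: just enqueue and acknowledge immediately."""
    inbound.put(slack_request)
    return {"response_type": "ephemeral", "text": "Working on it..."}

def internal_poller(stop):
    """Runs inside the office network: polls the queue, calls internal APIs."""
    while not stop.is_set():
        try:
            req = inbound.get(timeout=0.1)  # poll frequently to keep latency low
        except queue.Empty:
            continue
        # ...call the internal API here; the result would be delivered back
        # to Slack via the request's response_url.
        replies[req["id"]] = f"handled /{req['command']}"

stop = threading.Event()
worker = threading.Thread(target=internal_poller, args=(stop,), daemon=True)
worker.start()

ack = public_endpoint({"id": "req-1", "command": "poc"})
time.sleep(0.5)           # give the poller a moment (sketch only)
stop.set()
worker.join(timeout=1)
```

Slack expects an acknowledgement within a few seconds, which is why the public endpoint answers right away and the real result is delivered later via the response_url mechanism.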

How to implement authentication using remote actors?

I'm working on a card game and it seems like actors - specifically remote actors - would be a good fit. I'm having trouble figuring out how to implement the notion of logging in using remote actors. If a player starts up a fat client and enters a username and password, what should happen next? Should the client:
have a User remote actor where some state changes to represent a successful login?
call a method on an Authentication remote actor and get back a handle to a logged in User remote actor?
something else entirely?
I'm also wondering how this would fit in with reconnecting after a network issue.
Send an authentication message to a known remote actor; it responds with an actor you can talk to if login succeeds, and a failure message otherwise. Profit.
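That handshake can be sketched in-process (Python; the synchronous `tell`, the credential table, and all names are simplifications of a real remote-actor system such as Akka or Pykka):

```python
class Actor:
    """Tiny actor stand-in. A real actor system would enqueue messages on a
    mailbox and deliver them asynchronously; this sketch dispatches
    synchronously to keep the example self-contained."""
    def tell(self, msg):
        self.receive(msg)
    def receive(self, msg):
        raise NotImplementedError

class UserActor(Actor):
    """Handed back to the client on successful login; handles game messages."""
    def __init__(self, username):
        self.username = username
        self.log = []
    def receive(self, msg):
        self.log.append(msg)   # game messages would be handled here

class AuthActor(Actor):
    """The well-known actor the client talks to first."""
    CREDENTIALS = {"alice": "s3cret"}   # illustrative only
    def receive(self, msg):
        kind, username, password, reply_to = msg
        if kind == "login" and self.CREDENTIALS.get(username) == password:
            reply_to.tell(("login_ok", UserActor(username)))
        else:
            reply_to.tell(("login_failed", None))

class ClientActor(Actor):
    def __init__(self):
        self.user = None
    def receive(self, msg):
        kind, ref = msg
        if kind == "login_ok":
            self.user = ref   # keep this handle for all further game traffic

auth = AuthActor()
client = ClientActor()
auth.tell(("login", "alice", "s3cret", client))
```

This also answers the reconnection question: after a network issue the client simply repeats the login message to the well-known auth actor and receives a fresh (or restored) user actor handle.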
