Link Solidity source code to a deployed contract on a forked network - debugging

I am debugging my smart contract using Hardhat and a local forked mainnet. My contract calls an underlying smart contract which is not deployed by me. When the underlying contract fails and throws an exception, the stack trace only shows <UnrecognizedContract>.<unknown> (with the 0x contract address). I have the source code of the underlying contract and would like to link it to the deployed binaries so that I know exactly where the exception is thrown. Is there a way to achieve this in Hardhat?
Hardhat is able to link my own contract's source code and provide a clear stack trace, so I assume there might be a way. But I did not find any clues in Hardhat's docs.
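One thing worth trying (not confirmed by Hardhat's docs, so treat this as a sketch): add the third-party contract's source into your contracts/ folder and configure a second compiler entry with the same solc version and optimizer settings the original deployment used (often visible on the project's Etherscan verification page). If the recompiled bytecode matches what is deployed, Hardhat Network has artifacts to decode the trace against. The versions and optimizer settings below are placeholders:

```javascript
// hardhat.config.js — illustrative sketch; replace versions/settings with
// the ones the third-party project actually compiled with.
module.exports = {
  solidity: {
    compilers: [
      { version: "0.8.17" },              // your own contracts
      {
        version: "0.8.4",                 // the third-party contract (assumed)
        settings: { optimizer: { enabled: true, runs: 200 } },
      },
    ],
  },
  networks: {
    hardhat: {
      // placeholder RPC URL for the mainnet fork
      forking: { url: "https://eth-mainnet.example/YOUR_KEY" },
    },
  },
};
```

This only helps if the recompiled bytecode is close enough to the deployed bytecode for Hardhat to match it, which depends on the exact compiler settings being reproduced.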

Related

How do I integrate a Go binding with MetaMask?

I am still learning. I have built a web3/JS-based front end that deploys and interacts with a contract where others can contribute ETH (a Kickstarter-type contract). Everything works fine, with MetaMask kicking in for any payable transaction. I then built a front-/back-end app using Go. I deployed the same Solidity contract and ran it through abigen to produce a Go bindings file. I can interact with the bindings only by hard-coding my private key or passing it through the front end.
What I can't find info on is how to integrate MetaMask with the bindings, so other folks can contribute to the contract. I'd very much appreciate knowing if this is even possible, and if so, could you point me in the right direction? I've got this far but I'm stumped!
Thanks very much.

How to find the source code for a contract address, like using Etherscan on ETH?

For example, this is my transaction hash for staking NEAR to a validator node:
https://explorer.near.org/transactions/3fmNUWvrTnbySNo3eycPnuT7Fn5mR8LzcUJkdX1Y5xJd
I called the 'deposit_and_stake' method to do that, but how can I get the code of that method in the contract?
There is currently no way to match a deployed contract back to its source code. To do that we would need to have:
some registry of contract hashes mapping them to source code links (could be a contract on chain)
tooling that would automatically push the data into the registry
contract source code that is open-source and has a reproducible build setup (e.g. publish a Docker image with all the dependencies, so re-compilation of the same source code produces a completely identical Wasm binary)
DevConsole will strive to provide better tooling for this scenario, but it is not there yet.

How to troubleshoot and resolve the XrmToolBox Plugin Registration tool not connecting?

Seems like every other day XrmToolBox's Plugin Registration tool fails to connect. It's probably the most fickle tool I've ever used professionally (is this really the best tool for the job? yikes)
In years of working with it, I've not yet found a reliable way to get the tool to connect. Everything connects fine in the browser. But XrmToolBox randomly fails.
And I've never found or read online a reliable way to figure it out except restart your computer, throw salt over your shoulder, spin counter-clockwise once in your chair, try again later.
Anyone have a better way?
The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework SDK documentation and inspect the server trace logs.
That was the only error I got from XrmToolBox, but it led me to solving the problem. I followed this article to enable a more detailed error log:
https://community.adxstudio.com/products/adxstudio-portals/documentation/developers-guide/knowledge-base/enable-detailed-errors-on-the-organization-service/
I tried again, and saw that there was indeed a meaningful error in the XrmToolBox logs.
TL;DR: Turn on better error logging in the on-premise CRM web.config! Then try again to get a more helpful error.
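The change the linked article describes boils down to the standard WCF serviceDebug setting. A sketch of the relevant web.config fragment — the exact file location and surrounding behavior elements vary by deployment, so treat this as illustrative:

```xml
<!-- Illustrative fragment for the on-premise CRM server's web.config -->
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceDebug includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

Remember to turn this back off once you're done debugging, since it leaks exception details to clients.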

Is it possible to use Nodejs Crypto module functionality in Fabric Composer?

I would like to be able to use various functions from the Node.js crypto module in my Fabric Composer chaincode. Is that possible? I tried hashing a simple string within my chaincode but it didn't work: in Playground I received the error message 'TypeError: Cannot read property 'apply' of null', and using the CLI against a local HLF the transaction never completed. I tested my JavaScript hashing code separately and it works, but not when I try to run it within chaincode.
At the moment you cannot use Node modules in transaction processor functions; you're limited to base JavaScript. It's possible that Node.js support will come in the future, and making crypto APIs available in Composer is another option being considered.
If you want to try something in the meantime, crypto-browserify might be worth investigating. I don't know if anyone has got that to work though, so please report back if you try it.

How do you protect your API key in your tests when you publish a gem with tests?

I am using RSpec in my tests and I would like to protect my API key when I publish my gem on GitHub.
What are the best practices to do that? Should I use VCR and then remove my key from the git log?
Broadly speaking, here are three approaches I have used in the past in similar situations. Which you choose will depend on the details of your particular situation.
Test user supplies API key
If your test suite requires, or at least prefers, actual API calls with an actual API key, you can have the caller of the tests supply the credentials when running the tests.
The two most common ways of doing this are:
A file in the project with a well-known name that is not checked into version control. Include an example file with fake credentials (which is checked into version control), along with instructions for users to copy their real credentials into the real file before running the test suite.
Reading from environment variables. Include instructions for users to set the appropriate environment variables before running the test suite.
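A minimal sketch of the environment-variable option in Ruby — the variable name MYGEM_API_KEY is illustrative, not from the question's gem:

```ruby
# Sketch: read the key from the environment and fail fast with instructions
# if it is missing. The variable name MYGEM_API_KEY is illustrative.
def api_key
  ENV.fetch("MYGEM_API_KEY") do
    raise "Set MYGEM_API_KEY before running the test suite, " \
          "e.g. MYGEM_API_KEY=... bundle exec rspec"
  end
end
```

Calling this in a spec helper means contributors get a clear message instead of a cryptic API failure when the key is absent.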
Otherwise,
Mock out the API
This can be the VCR approach you described, or it could be patching the API call to return some fake results.
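A sketch of the patched-API-call variant without VCR, using plain dependency injection — the class and method names here are illustrative, not from any real gem:

```ruby
# Sketch: inject the HTTP client so tests can substitute a fake.
# WeatherLookup and FakeClient are illustrative names.
class WeatherLookup
  def initialize(client)
    @client = client
  end

  def temperature(city)
    @client.get("/weather", city: city)["temp"]
  end
end

class FakeClient
  def get(_path, _params)
    { "temp" => 21 }   # canned API response; no network, no API key
  end
end
```

A spec can then exercise WeatherLookup.new(FakeClient.new) without any credentials; RSpec doubles or VCR cassettes are drop-in alternatives to the hand-rolled fake.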
Test your domain-specific code separate from the API interaction
Assume the API and the API client behave how you expect. Then factor out the parts of your code which construct the API input and process the API output. Test properties of the input you generate; test the behavior of the output processor against known or fake output.
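A sketch of that factoring in Ruby: the request builder and response parser are pure functions, so they can be tested with no network and no key (all names are illustrative):

```ruby
# Sketch: keep request-building and response-parsing as pure functions
# so they are testable without touching the API. Names are illustrative.
def build_query(user_id, fields)
  { id: user_id, fields: fields.sort.join(",") }
end

def parse_response(json)
  json.fetch("items", []).map { |i| i["name"] }
end
```

For example, build_query(7, %w[b a]) yields { id: 7, fields: "a,b" }, and parse_response gracefully returns an empty array when "items" is missing — both easy properties to assert in a spec.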
Finally, a warning:
If you have ever committed your API key to version control, it will be visible in the history. If you have ever pushed to a public hosting service, it has been exposed to the Internet; most notably, to specialized bots which scrape newly-pushed commits for sensitive credentials. If this is you, change your credentials now!
I can't find the original blog post at the moment, but there was at least one report of someone accidentally pushing their AWS credentials to GitHub. They subsequently woke up to a several thousand dollar bill.
