Debugging Transaction Code - hyperledger-composer

I've been asking myself whether there is an easy way of debugging the JavaScript code of the transactions. JS already has mature debuggers; it is only a question of how to bind one to the code running in the container. Does anyone have a clue? -- Thx.

One of the easiest ways to debug your transaction code is to deploy your business network to the embedded fabric. That basically means your code runs like any other Node.js app, so you can use the Node debugger to step through it, or even simple console.log statements if that suffices.
To get an insight into how to achieve this, have a look at the code here (updated link):
https://github.com/hyperledger/composer-sample-networks/blob/master/packages/carauction-network/test/CarAuction.js#L31-L49
This is the beforeEach method of a unit test for the sample network, and as you'll see, it deploys the network to the 'embedded' fabric.
The code then goes on to perform tests that include calling the submitTransaction API on the embedded businessNetworkConnection, which causes the transaction script code to be eval'ed by the embedded fabric.
So it's all happening within a single Node app and is much easier to debug.
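As a rough, from-memory sketch of that pattern (assuming the older, pre-card Composer API used in the linked test; the profile name, credentials and network name below are placeholders and may differ from the actual file):

    const AdminConnection = require('composer-admin').AdminConnection;
    const BrowserFS = require('browserfs/dist/node/index');
    const BusinessNetworkConnection = require('composer-client').BusinessNetworkConnection;
    const BusinessNetworkDefinition = require('composer-common').BusinessNetworkDefinition;
    const path = require('path');

    const bfs_fs = BrowserFS.BFSRequire('fs');
    let businessNetworkConnection;

    beforeEach(() => {
        // Back the embedded runtime with an in-memory file system.
        BrowserFS.initialize(new BrowserFS.FileSystem.InMemory());
        const adminConnection = new AdminConnection({ fs: bfs_fs });
        return adminConnection.createProfile('defaultProfile', { type: 'embedded' })
            .then(() => adminConnection.connect('defaultProfile', 'admin', 'adminpw'))
            // Load the business network from the project directory and deploy it in-process.
            .then(() => BusinessNetworkDefinition.fromDirectory(path.resolve(__dirname, '..')))
            .then((definition) => adminConnection.deploy(definition))
            .then(() => {
                businessNetworkConnection = new BusinessNetworkConnection({ fs: bfs_fs });
                return businessNetworkConnection.connect('defaultProfile', 'my-network', 'admin', 'adminpw');
            });
    });

    it('runs my transaction processor in-process', () => {
        const factory = businessNetworkConnection.getBusinessNetwork().getFactory();
        // Hypothetical namespace/transaction name; use the ones from your own model.
        const tx = factory.newTransaction('org.example.mynetwork', 'MyTransaction');
        // Because the runtime is embedded, a breakpoint inside the transaction function
        // is hit when running e.g. node --inspect-brk ./node_modules/.bin/mocha
        return businessNetworkConnection.submitTransaction(tx);
    });

Simple console.log calls inside the transaction function also show up directly in the test output, since everything runs in the one Node process.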
HTH

Related

Ethereum / Geth: How to execute view code without deploying a contract?

Ethereum/smart contracts enthusiasts,
I want to execute some code which does not modify the state variables on my own Geth ETH-Node without deploying a contract. Is that somehow possible?
My current thoughts:
I have debugged Geth a little. I found that when a view function is executed, StaticCall is invoked from the EVM class. It seems that at this point I could also inject bytecode of my own view functions without deploying it. From my understanding, a view function does not edit any state variables; it only reads and returns them. This would mean I can technically do that without corrupting the chain. But this way of changing the code seems a bit oversized; is there a simpler way?
Thanks.
It is not possible. You need the EVM to execute the code. The EVM is an entirely isolated and sandboxed runtime environment, and there are some key elements the execution environment requires in order to execute the code.
Deploying a contract creates an instance of the contract; you then interact with that contract instance.
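To make that concrete, here is a minimal web3.js sketch of the usual route: deploy once, then read state through a view call. The ABI fragment, address and method name are placeholders of mine, not anything from the question:

    const Web3 = require('web3');
    const web3 = new Web3('http://localhost:8545'); // your own Geth node

    // ABI fragment and address of an already-deployed contract (placeholders).
    const abi = [{
        name: 'getValue', type: 'function', stateMutability: 'view',
        inputs: [], outputs: [{ name: '', type: 'uint256' }]
    }];
    const address = '0x0000000000000000000000000000000000000000';

    const contract = new web3.eth.Contract(abi, address);

    // A view call is served via eth_call (a STATICCALL in the EVM): it executes
    // against current state, returns a value, and never sends a transaction.
    contract.methods.getValue().call().then(console.log);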

Access filesystem in Karma-Jasmine

I have a bunch of HTML5 canvas tests in my library which I run via Karma and Jasmine. If I detect differences in my tests I show the canvas DOM elements with my generated image output and a diff on the browser page. But when I run my tests in Chrome Headless and/or in my CI environment I have no way of checking the test results in case they fail. And that's currently the issue I'm facing: when I run the tests with a UI they are green; if I run them in Chrome Headless they fail, but I have no way of checking my visual output.
As a solution I'd like to write my generated images to disk. For local tests I can then check this folder to see what happened, and on CI I can publish the result images as artifacts. But here comes the point: Karma and Jasmine seem to have no proper mechanism in place for this task. I also could not find any plugin tackling the challenge of properly accessing the local file system from your tests.
A tricky aspect is also that I cannot use promise (async/await) operations at the place where I want to save files: I am within a Jasmine CustomMatcher, which does not have promise support; I could try rewriting my tests again to handle the error reporting to Jasmine myself.
My attempts so far:
I started with a custom Karma reporter listening to browser logs, with the idea of using this as a channel to hand the image bytes over to Node for writing to disk. But this additional plugin registration messed with my overall Karma configuration: Karma and Rollup stopped working once I registered my custom reporter, and I never found out whether such large amounts of bytes can even be transferred via this channel.
Then I started on an API via karma-express-http-server where I "upload" the files to be saved. But I stopped halfway, as such a simple task seems to require yet another bunch of libs and custom implementation just to get a simple file upload running (karma-express-http-server, multer). Also, I would need to rely on synchronous Ajax calls here, which is not really future-proof.
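For what it's worth, a rough sketch of that upload idea using a custom Karma middleware plus a synchronous XMLHttpRequest from the matcher; the endpoint name, file layout and plugin wiring below are my own assumptions, not an existing plugin:

    // save-image-middleware.js - registered in karma.conf.js via
    // plugins: [require('./save-image-middleware')], middleware: ['save-image']
    const fs = require('fs');
    const path = require('path');

    function saveImageMiddlewareFactory() {
        return function (req, res, next) {
            if (req.method !== 'POST' || req.url !== '/save-image') { return next(); }
            let body = '';
            req.on('data', (chunk) => { body += chunk; });
            req.on('end', () => {
                const { name, dataUrl } = JSON.parse(body);
                const outDir = path.join(__dirname, 'test-output');
                fs.mkdirSync(outDir, { recursive: true });
                // Strip the data-URL prefix and write the PNG bytes to disk.
                const base64 = dataUrl.replace(/^data:image\/png;base64,/, '');
                fs.writeFileSync(path.join(outDir, name), base64, 'base64');
                res.end('ok');
            });
        };
    }

    module.exports = { 'middleware:save-image': ['factory', saveImageMiddlewareFactory] };

    // Browser side, inside the CustomMatcher (no promises needed):
    // const xhr = new XMLHttpRequest();
    // xhr.open('POST', '/save-image', false); // synchronous on purpose
    // xhr.send(JSON.stringify({ name: 'diff-1.png', dataUrl: canvas.toDataURL('image/png') }));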
The Native File System API heavily relies on Promises so I cannot use it.
There must be a simpler way of just writing a file to disk as part of your tests when using Karma and Jasmine.

How do I verify my LoadRunner test scripts are actually being executed?

I have a requirement to load test a web application using LoadRunner (Community Edition 12.53). Currently I have my test scripts recorded using the LoadRunner default test script recorder. I'm assuming that the operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario?
When I ran my test scripts, I couldn't see any values updated in the application DB.
My test scripts are written in C, and manual correlation is applied using the web_reg_save_param method.
What might be the things that could go wrong in such a scenario? Any help would be deeply appreciated.
the operations I chose to perform in SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario? - Yes, this is the correct behaviour.
When I ran my test scripts, I couldn't see any value or nothing updated in the application DB. - You might be missing something in the correlations. This generally happens when some variable is not correlated properly or gets missed, or when something like a timestamp that you might think is irrelevant actually needs to be taken care of.

How do I implement CloudPort side logic? JScript or VBScript produce an ActiveX control fault (even when left dummy, with empty-body functions)

The team I am working with has bought a CloudPort license (from CrossCheck Networks) and we are currently facing the problem of not being able to implement any sort of logic in the service Mocks (to control response selection). It would be something as simple as:
if (requestCounter++ == 1) then
    response = $fn:Global(MyFirstXmlString)$   // <-- this is CloudPort syntax for variables
else
    response = $fn:Global(MySecondXmlString)$
We did not find any sample for using the DLL Plugin, and neither the JScript nor the VBScript Task works (i.e., our client machine gets back not the desired MySecondXmlString response but instead a Fault with
<faultstring>
ActiveX control '0e59f1d5-1fbe-11d0-8ff2-00a0d10038bc' cannot be instantiated
because the current thread is not in a single-threaded apartment.
</faultstring>).
Believe it or not, the fault above is obtained even if the JScript or VBScript task is left empty! It's hard for us to believe that all the logic functionality advertised in the CloudPort UI is fake and that nothing can help one implement the kind of logic described above.
Any help would be appreciated!
Thanks,
Pompi
PS: A bit more detail on why the kind of logic described above is needed: we use SoapSonar in our testing framework to fire requests at a BizTalk orchestration application. The CloudPort mocks are needed to simulate the environment of that BizTalk orchestration. We cannot control individual mocked responses via SoapSonar requests: the (for CloudPort, incoming) client requests are made by production code and cannot be altered or controlled by our SoapSonar client. The only Tasks functionality that worked for us is a DB table acting as an offline channel between SoapSonar and CloudPort (SoapSonar writes to it and CloudPort reads from it). CloudPort's reading of, say, responseXml's from the DB works fine, but we cannot find a way to implement further behaviour-controlling logic on the CloudPort side. Hence this stackoverflow posting. And thanks for having the patience to read this whole shenanigan :).
Don't think you can control this from the script.
The threading model should be controlled by the host, which I suppose uses Windows's "vbscript.dll" for the actual execution.
So if you cannot find any settings under the tool's options or in the help :), you should look in the registry keys for the threading options of that ActiveX control or of "vbscript.dll".
That is the "ThreadingModel" value; try changing it (you will also have to search the net for the possible values, I don't know them by heart).
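For example (an assumption on my part, since I don't know CloudPort's internals), for the class from the fault string the value would typically live under its InprocServer32 key and can be inspected like this:

    reg query "HKCR\CLSID\{0e59f1d5-1fbe-11d0-8ff2-00a0d10038bc}\InprocServer32" /v ThreadingModel

Typical values are Apartment, Free, Both or Neutral; back up the key before experimenting.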
There is also a chance that some other application (antivirus?) has changed the path that the COM interface should actually point to (see http://social.technet.microsoft.com/Forums/en-US/ieitpropriorver/thread/ac10bd5f-6d91-4aac-857c-0ed5758088ec).
Hope it helps.

TDD Scenario: Looking for advice

I'm currently in an environment where we are parsing data off of the client's website. I want to use my tests to ensure that when the client changes their site, I know when we are no longer receiving the information.
My first approach was to do pure integration tests where my tests hit the client's site and assert that the data was found. However, halfway through and 500 tests in, the test run has become unbearable and in some cases has started timing out. So I cleared out as many tests as I could without losing the core protection they provide, and I'm down to 350 or so. I'm left with a fear of adding more tests only to break the whole suite. I also find myself no longer running the 5+ minute run (some clients will take longer, as this depends on the speed of communication with their site) when I make changes. I consider this a complete failure.
I've been putting a lot of thought into this and asking around the office, and my plan for my next attempt is to pull down the client's pages and write tests against these embedded resources in my projects. This will give me higher test coverage and allow me to go back to testing in isolation. However, I would need to be notified when they make changes and then re-pull the pages to test against. I don't think the clients will adhere to this.
A suggestion was made to me to augment this with a suite of 'random' integration tests that serve the same function as my failed tests (hitting the client's site) but in far smaller numbers than before. I really don't like the idea of random testing, where there is the possibility of sometimes getting red lights and sometimes getting green lights with the same code. But so far this sounds like the best idea I've heard for still gaining awareness of when the client's site has changed and my code no longer finds the data.
Has anyone found themselves testing an environment like this? Any suggestions from the testing community for me?
When you say the big test run has become unbearable, it suggests that you are running this test suite manually. You shouldn't have to: it should just be running constantly in the background, at whatever speed it takes to complete the suite, and then start over again (perhaps after a delay if there are associated costs). Only when something goes wrong should you get an alert.
If there is something about your tests that causes them to get slower as their number grows - find it and fix it. Tests should be independent of one another, so simply having more of them shouldn't cause individual tests to time out.
My recommendation would be to try to isolate as much as possible the part of code that deals with the uncertainty. This part should be an API that works as a service used by all the other code. This way you would be protecting most of your code against changes.
The stable parts of the code should be unit-tested. With that part independent from the connection to the client's site, running the tests should be much quicker, and it would also make those tests more reliable.
The part that has to deal with the changes on the client's websites can be reduced. This way you are not solving the problem but at least you're minimising it and centralising it in only one module of your code.
Suggesting that the clients expose the data as a web service would be best for you. But I guess that doesn't depend on you :P.
You should look at dividing your tests up, maybe into separate assemblies that can be run independently. I typically have a unit tests assembly and a slower running integration tests assembly.
My unit tests assembly is very fast (because the code is tested in isolation using mocks) and gets run very frequently as I develop. The integration tests are slower and I only run them when I finish a feature / check in or if I have a bad feeling about breaking something.
Maybe you could do something similar or even take the idea further and have 3 test suites with the third containing even slower client UI polling tests.
If you don't have a continuous integration server / process you should look at setting one up. This would continuously build your software and execute the tests. It could be set up to monitor check-ins and work in the background, sending out a notification if anything fails. With this in place you wouldn't care how long your client UI polling tests take, because you wouldn't ever have to run them yourself.
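As an illustration of that split in a Node/Jasmine project (the config file names and script names are just my own convention, and the original question may well be on a different stack), a package.json scripts excerpt:

    "scripts": {
        "test:unit": "jasmine --config=spec/support/jasmine-unit.json",
        "test:integration": "jasmine --config=spec/support/jasmine-integration.json"
    }

The fast test:unit script runs on every check-in; the CI server runs test:integration on a schedule or after each successful unit build.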
Definitely split the tests out - separate unit tests from integration tests as a minimum.
As Martyn said, get a Continuous Integration system in place. I use Teamcity, which is excellent, easy to use, free for the first 20 builds, and you can happily run it on your own machine if you don't have a server at your disposal - http://www.jetbrains.com/teamcity/
Set up one build to run on every check in, and make that build run your unit tests, or fast-running tests if you will.
Set up a second build to run at midnight every night (or some other convenient time), and include in this the longer running client-calling integration tests. With this in place, it won't matter how long the tests take, and you'll get a big red flag first thing in the morning if your client has broken your stuff. You can also run these manually on demand, if you suspect there might be a problem.
