How to find out how to interact with a smart contract on NEAR? Where to see its source or bytecode? - nearprotocol

Having checked the two main NEAR explorers, the official one and nearscan, I haven't found any way to see the bytecode or source code of smart contracts, any ability to interact with a smart contract, or any way to check what functions a smart contract has. On any Ethereum explorer, and on a few others, all of this is possible.
Or perhaps I've missed something?
Is it correct that none of the above-mentioned features exist on NEAR explorers?

Very good question.
You can use "Stats Gallery"; this tool lets you see stats for an account (or a contract, which is just an account with a contract deployed).
It also lets you interact with the contract.
https://stats.gallery/mainnet/nit.globaledu.near/contract?t=week
This is an example; you can see the functions of my "Feedback Contract".
Alternatively, you can load a testnet contract and try calling some functions, for example this one:
https://stats.gallery/testnet/alpha.neatar.testnet/contract?t=week
And the source code of the tool is here, just in case you want to know exactly how it works:
https://github.com/NEARFoundation/stats.gallery/blob/90c76d452bdb8ce12a7b1a4675df453f437494cc/src/composables/contract/useContract.ts
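If you want the raw bytecode rather than a UI, you can also go straight to the NEAR JSON-RPC API: a query request with request_type "view_code" returns an account's deployed Wasm as base64. A minimal Go sketch against the public testnet endpoint, using the example contract above (a sketch, not production code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// JSON-RPC query for the Wasm deployed to an account.
	reqBody, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      "1",
		"method":  "query",
		"params": map[string]string{
			"request_type": "view_code",
			"finality":     "final",
			"account_id":   "alpha.neatar.testnet",
		},
	})

	resp, err := http.Post("https://rpc.testnet.near.org", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Result struct {
			CodeBase64 string `json:"code_base64"`
			Hash       string `json:"hash"`
		} `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	// The contract's Wasm module, base64-encoded; decode it and feed it
	// to a tool like wasm-objdump to inspect the exported functions.
	fmt.Println("code hash:", out.Result.Hash)
	fmt.Println("base64 length:", len(out.Result.CodeBase64))
}
```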
Regards

Related

Is there a publicly available list of known good GS1-128 codes for testing?

I just finished writing a function to interpret GS1-128 codes (the data, not the barcode/datamatrix image) in (hopefully) any combination they can come in.
I am now trying to thoroughly test the function.
All manually generated codes I have tried work fine (on the first try, obviously), but generating error-free GS1-128 codes by hand is a rather slow process, and it's also flawed methodology, since my understanding of how to create a conformant code and my function's logic are obviously the same, but not necessarily correct.
I have already inquired at the local GS1 organization whether they have a list of known-good codes for testing they are allowed to hand out. The answer was no.
I have also searched the internet for either a list or an automated means of generating codes (preferably with varied contents and orderings of content) in bulk; neither yielded a helpful result.
I don't really know of any better place to ask this question, so I'm asking this community, in the hope that someone might have such a list lying around or has a means of generating a reasonable (or unreasonable) number of codes (the data, not the barcode/datamatrix image, although that would work just as well) without too much effort.
If it's available I'd gladly take a list with decoded contents per code to further automate and scale up the testing, but I'm really not going to turn anything down :P
Hope whoever read this far is having a good day.
I would suggest that you take a look at the unit tests in the GS1 Barcode Syntax Engine, which comprehensively processes GS1 AI syntax messages:
https://github.com/gs1/gs1-syntax-engine/blob/main/src/c-lib/ai.c#L795
It's a shame that your GS1 Member Organisation were unable to refer you to their own tool.
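If you do end up generating your own corpus, the check digit is the part that's easiest to get silently wrong. Here's a small hand-rolled Go sketch (my own, not taken from the syntax engine) that computes the standard GS1 mod-10 check digit and assembles one element string, using ASCII GS (0x1D) as the FNC1 separator after the variable-length lot number; the GTIN and lot values are made-up test data:

```go
package main

import (
	"fmt"
	"strconv"
)

// gs1CheckDigit computes the standard GS1 mod-10 check digit for a
// numeric payload (GTIN, SSCC, ... without the check digit itself).
// Weights alternate 3,1,3,1,... starting from the rightmost digit.
func gs1CheckDigit(payload string) string {
	sum, weight := 0, 3
	for i := len(payload) - 1; i >= 0; i-- {
		sum += int(payload[i]-'0') * weight
		weight = 4 - weight // toggle 3 <-> 1
	}
	return strconv.Itoa((10 - sum%10) % 10)
}

func main() {
	payload := "0401234512345" // 13 digits; the check digit completes the GTIN-14
	gtin := payload + gs1CheckDigit(payload)
	// Element string: AI 01 (GTIN-14, fixed length), then AI 10 (lot,
	// variable length, so it is terminated by FNC1 = ASCII GS because
	// more data follows), then AI 17 (expiry date, YYMMDD, fixed length).
	element := "01" + gtin + "10" + "LOT42A" + "\x1d" + "17" + "261231"
	fmt.Printf("%q\n", element)
}
```

Randomizing the payloads and the AI ordering from a table like this gets you bulk test data; cross-checking the output against the syntax engine's parser then tests your implementation against an independent one rather than against itself.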

How do you estimate the gas usage of a NEAR smart contract method call?

Is there a tool that can estimate how much gas a contract call will use before submitting it to the NEAR network?
Currently the best estimation is to use runtime-standalone, which can process transactions without having to worry about consensus or networking. This means you can create accounts, deploy contracts, and invoke them, and the returned outcome includes how much gas was burnt and used. The difference: burnt gas is what was spent executing the function call itself, while used gas also includes the gas used by the contract's promise calls.
However, it's currently an MVP prototype and has only been used to test our core contracts; here it is being used to test the lockup contract.
If your contract method doesn't invoke any batch promises, only normal promises, the mock runtime in near-sdk-as provides a way to create accounts and "deploy" contracts. It does this by internally using the binary of near-vm-runner-standalone, which is a Rust crate. The binary provides a CLI to invoke a single transaction, taking as input the current state of the contract being called, the contract's binary, a config file that defines the current context (who is calling the contract, how much gas is prepaid, etc.), and a config for the cost of different fees. It then returns the updated state and the outcome of the transaction (e.g. how much gas was used and any receipts of transactions queued by promise calls).
The near-vm-runner-standalone binary is also published to npm under the package name near-vm, which is what the mock runtime uses.
This is still an active area of development, and we hope to turn the standalone runtime into a useful, easy-to-use tool for testing and gas estimation.
The easiest way to do it is to submit a sample transaction with more gas attached than needed, and then check in the explorer how much gas was used; e.g. see
https://explorer.testnet.near.org/transactions/23dgV15pydiVhirWJ4He7TMoyRJM2DUXtcWb7VXFSy2G
300 Tgas was attached and 47 Tgas was used for that transaction.
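The same numbers the explorer shows can be fetched over JSON-RPC with the tx method, which takes the transaction hash and the sender's account ID and returns a gas_burnt figure per outcome. A rough Go sketch; the sender account ID below is a made-up placeholder, since only the hash appears above:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type outcome struct {
	Outcome struct {
		GasBurnt uint64 `json:"gas_burnt"`
	} `json:"outcome"`
}

func main() {
	// "tx" looks a transaction up by hash + sender account ID
	// (the account ID here is a placeholder).
	reqBody, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      "1",
		"method":  "tx",
		"params":  []string{"23dgV15pydiVhirWJ4He7TMoyRJM2DUXtcWb7VXFSy2G", "sender.testnet"},
	})
	resp, err := http.Post("https://rpc.testnet.near.org", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Result struct {
			TransactionOutcome outcome   `json:"transaction_outcome"`
			ReceiptsOutcome    []outcome `json:"receipts_outcome"`
		} `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	// Total gas burnt = the transaction's own outcome plus every receipt.
	total := out.Result.TransactionOutcome.Outcome.GasBurnt
	for _, r := range out.Result.ReceiptsOutcome {
		total += r.Outcome.GasBurnt
	}
	fmt.Printf("total gas burnt: %d (%.1f Tgas)\n", total, float64(total)/1e12)
}
```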

Event model or schema in event store

Events in an event store (event sourcing) are most often persisted in a serialized format, with versions to represent a change in the model or schema for an event type. I haven't been able to find good documentation showing the actual model or schema for an actual event (often the events table in the event store schema if using an RDBMS), but I understand that ideally it should be generic.
What are the most basic fields/properties that should exist in an event?
I've contemplated using json-api as a specification for my events but perhaps that's too "heavy". The benefits I see are flexibility and maturity.
Am I heading down the "wrong path"?
Any well defined examples would be greatly appreciated.
Don't overlook forward and backward compatibility.
You should plan to review Greg Young's book on event versioning; it doesn't directly answer your question, but it does cover a lot about the basics of interpreting an event.
Short answer: pretty much everything is optional, because you need to be able to change it later.
You should also review Hohpe's Enterprise Integration Patterns, in particular his work on messaging, which details a lot of cases you may care about.
de Graauw's Nobody Needs Reliable Messaging helped me understand an important point.
To summarize: if reliability is important on the business level, do it on the business level.
So while there are some interesting bits of metadata tracking that you may want to do, the domain model is really only going to look at the data; and that is going to tend to be specific to your domain.
You also have the fun that the representation of events that you use in the service that produces them may not match the representation that it shares with other services, and in particular may not be the same message that gets broadcast.
I worked through an exercise trying to figure out the minimum amount of information a subscriber needs when looking at an event to understand whether it cares. My answers were an id (have I seen this specific event before?), a token that tells you the semantic meaning of the message (is this something I care about?), and a location (URI) to get a richer representation if it is something I care about.
But outside of the domain -- for example, when you are looking at the system as a whole trying to figure out what is going on -- having correlation identifiers, causation identifiers, timestamps, signatures of the source location, and so on stored in a consistent location in the metadata can be a big help.
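Pulling those pieces together (identity, semantic type, correlation/causation, a timestamp, the domain payload kept opaque), one possible minimal envelope in Go might look like the sketch below. Every field name here is an assumption for illustration, not a standard:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Envelope is the generic part an event store or subscriber looks at.
// The domain payload stays opaque (json.RawMessage) so the envelope
// never has to change when the domain model does.
type Envelope struct {
	EventID       string          `json:"eventId"`       // have I seen this before?
	EventType     string          `json:"eventType"`     // semantic meaning, e.g. "invoice.paid"
	SchemaVersion int             `json:"schemaVersion"` // for up-converting old events
	OccurredAt    time.Time       `json:"occurredAt"`
	CorrelationID string          `json:"correlationId,omitempty"`
	CausationID   string          `json:"causationId,omitempty"`
	Data          json.RawMessage `json:"data"` // domain-specific; opaque here
}

func main() {
	e := Envelope{
		EventID:       "e-0001",
		EventType:     "invoice.paid",
		SchemaVersion: 1,
		OccurredAt:    time.Now().UTC(),
		CorrelationID: "checkout-42",
		Data:          json.RawMessage(`{"invoiceId":"inv-7","amount":"12.50"}`),
	}
	b, _ := json.MarshalIndent(e, "", "  ")
	fmt.Println(string(b))
}
```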
Just modelling with basic types that map to JSON, as you would for an API, can go a long way.
You can spend a lot of time generating overly complex models if you throw too much tooling at it - things like Apache Thrift and/or Protocol Buffers (or derived things) will provide all sorts of IDL mechanisms for you to generate incidental complexity with.
In .NET land, and on many other platforms, if you namespace the types you can do various projections from them.
Personally, I've used records and DUs in F# as a design and representation tool: you get IntelliSense, syntax highlighting, and types you can use from F# or C# for free, and if someone wants to look, types.fs has all they need.
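The same "plain types that map to JSON" idea carries over to Go, minus the discriminated unions; a type tag plus a switch does the job of an F# match. Everything here (the event names, the field names) is illustrative, not a standard:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Plain records, one per event type; the JSON is the contract.
type OrderPlaced struct {
	OrderID string `json:"orderId"`
	Total   string `json:"total"`
}

type OrderShipped struct {
	OrderID string `json:"orderId"`
	Carrier string `json:"carrier"`
}

// decode plays the role of an F# match over a DU: the type tag
// picks the concrete record to deserialize into.
func decode(eventType string, data []byte) (interface{}, error) {
	switch eventType {
	case "order.placed":
		var e OrderPlaced
		err := json.Unmarshal(data, &e)
		return e, err
	case "order.shipped":
		var e OrderShipped
		err := json.Unmarshal(data, &e)
		return e, err
	default:
		return nil, fmt.Errorf("unknown event type %q", eventType)
	}
}

func main() {
	e, err := decode("order.placed", []byte(`{"orderId":"o-1","total":"9.99"}`))
	fmt.Println(e, err)
}
```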

How do I reverse-engineer the "import file" feature of an abandoned pascal application?

First question I've asked, and I'm not sure how to ask it clearly, or if there will be an answer that I want to hear ;)
tl;dr: "I want to import a file into my application at work but I don't know the input format. How can I discover it?"
Forgive any pending wordiness and/or redaction.
In my work I depend on an unsupported (and proprietary) application written in Pascal. I have no experience with Pascal (yet...) and naturally have no source code access. It is an excellent (and very secret/NDA sort of deal, I think) application that allows us to deal with inventory and financial issues in my employer's organization. It is quite feature-comprehensive, reasonably stable and robust, and kind of foisted on us by a higher power.
One excellent feature that it has is the ability to load up "schedules" into our corporate system. This feature should be saving us hundreds of hours in data entry.
But it isn't.
The problem is, the schedules we receive are written in a legacy format intended for human eyes. The "new" system can't interpret them.
Our current information (which I have to read and then re-enter into the database by hand) is sent in a sort of rich-text flat-file format, which would be easy to parse with the string library of probably any mainstream language.
So I want to write a converter to convert our data into a format that the new software can interpret.
By feeding certain assorted files into the system, I have learned a little bit about what kind of file it expects:
I "import" a zero-byte file. Nothing happens (same as printing a report with no data)
I "import" an XML file that I guess might look like the system expects. It responds with an exception dialog and a stacktrace. Apparently the string <?xml contains illegal characters or something
I "import" a jpeg image -- similar result to #2.
So I think that my target wants a flat-file itself. The file would need to contain a "document number" along with {entries with "incident IDs" and descriptions and numeric values}.
But I don't know this for certain.
Nobody is able to tell me exactly what these files should look like. Someone in the know said that they have seen the feature demonstrated -- somewhere out there is a utility that creates my importable schedules. But for now, the utility is lost and I am on my own.
What methods can I use to figure out the input file format? I know nothing about debugging Pascal, but I assume that is probably my best bet. Or do I have to keep on with brute force until I can afford a million monkey-operated typewriters? Do I have to decompile the target application? I don't know if I can get away with that, let alone read the decompiled source.
My google-fu has failed me.
Has anyone done something like this before or could they point me in the right direction? Are there any guides on this subject?
Thanks in advance.
PS: I am certain that I am not breaking any laws at this point, although I will have to check to find out if decompilation would get me into trouble or not, and that might be outside of my technical competence anyway.
If you have an example file, you can take a hexdump utility and see if there are things you can identify. Any additional info you have (what should be in the file) helps with that. Better yet, if you know a program that can edit the file, you can use that editor to make minimal changes and then compare the file before and after.
In other words: the standard tricks of binary file-format reverse engineering.
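For the "minimal change, then compare" approach you don't even need a dedicated tool; a few lines reporting where two files diverge byte by byte (offsets in hex, as a hexdump would show them) are enough. A sketch in Go, with placeholder file names:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// before.dat / after.dat: the same file saved before and after
	// one minimal edit in the program that can write the format.
	a, err := os.ReadFile("before.dat")
	if err != nil {
		panic(err)
	}
	b, err := os.ReadFile("after.dat")
	if err != nil {
		panic(err)
	}

	// Compare the overlapping region byte by byte.
	n := len(a)
	if len(b) < n {
		n = len(b)
	}
	for i := 0; i < n; i++ {
		if a[i] != b[i] {
			fmt.Printf("offset 0x%08x: %02x -> %02x\n", i, a[i], b[i])
		}
	}
	if len(a) != len(b) {
		fmt.Printf("lengths differ: %d vs %d bytes\n", len(a), len(b))
	}
}
```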
If you have no existing files whatsoever, then reverse-engineering the binary is your only option, and that is not pretty. Decompilation of native binaries is a black art that requires considerable time and skill. Read the various decompilation FAQs on the net.
First of all, I would try to contact the authors of the program. Source code is options 1, 2, and 3; you only go with other options if there is really, really, really no hope whatsoever of obtaining the source or getting normal support.

How to properly use Golang packages in the standard library or third-party with Goroutines?

Hi Golang programmers,
First of all, I apologize if my question is not very clear, but I'm trying to understand the proper usage pattern for writing Go code that uses goroutines with the standard library or other libraries.
Let me elaborate: suppose I import some package that I didn't have a hand in writing and want to utilize. Let's say this package does a simple HTTP GET request to a website such as Flickr, for example. If I want a concurrent request, I can just prefix the function call with the go keyword. But how do I know that this package doesn't already make some internal go calls itself when doing the request, thereby making my go calls redundant?
Do Golang packages typically say in the documentation that their method is "greened"? Or perhaps they provide two versions of a method, one that is green and one that is straight synchronous?
In my quest to understand Go idioms and usage patterns, I feel like even when using packages in the standard library I can't be sure whether my go calls are necessary. I suppose I could profile the calls or write test code, but it feels odd to have to figure out whether a func is already "green".
I suppose another possibility is that it's up to me to study the source code of whatever I'm using and understand how it should be used and if the go keyword is necessary.
If anybody can shed some light on this or point me to the right documentation or even a Golang screencast, I'd much appreciate it. I think Rob Pike briefly mentions in one talk that a good client API in Go is just written in a typical synchronous manner, and it's up to the caller of that API to choose whether to make it green or not.
Thanks for your time,
-Ralph
If a function/method returns some value(s), or has a side effect like that (io.Reader.Read), then it's necessarily a synchronous thing. Unless documented otherwise, no safety for concurrent use by multiple goroutines should be assumed.
If it accepts a closure (callback) or a channel, or if it returns a channel, then it is often an asynchronous thing. If that's the case, it's normally either obvious or explicitly documented. Asynchronous stuff like this is usually safe for concurrent use by multiple goroutines.
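Concretely, the pattern Rob Pike describes looks like this: the library call (here net/http.Get, which is synchronous) stays synchronous, and the caller decides to run it concurrently. A small sketch with placeholder URLs:

```go
package main

import (
	"fmt"
	"net/http"
)

// fetchStatus is an ordinary synchronous call; nothing in the
// standard library forks a goroutine for you here.
func fetchStatus(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	return resp.Status, nil
}

func main() {
	urls := []string{"https://example.com", "https://example.org"}

	// The *caller* chooses concurrency: one goroutine per request,
	// results collected over a channel.
	results := make(chan string, len(urls))
	for _, u := range urls {
		go func(u string) {
			status, err := fetchStatus(u)
			if err != nil {
				results <- u + ": " + err.Error()
				return
			}
			results <- u + ": " + status
		}(u)
	}
	for range urls {
		fmt.Println(<-results)
	}
}
```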
