Interaction with a smart contract oracle

Is it possible for a smart contract to make calls to the oracle in a loop (as long as a condition is met) in a dapp? For example: a loop which makes calls to the oracle, which returns the result in a callback.
Or does the backend have to make one call at a time?

Because all transactions are executed synchronously, it is not possible for oracle state to change during the loop.
You cannot sleep() in a smart contract, so there is no way to wait inside a loop for the oracle state to change in the first place.


How to create a smart contract which will execute every day at the same hour?

The title is pretty self-explanatory: I want something which will execute every day, at the same hour, like a batch job.
Also, is it possible to have a smart contract with endpoints and batch executing inside?
Currently there is no way to schedule a smart contract execution inside a smart contract.
Meaning if you want to call a function in the smart contract you have to send a transaction to the smart contract. To achieve this at the same time every day you would have to use something like a cronjob on a traditional backend.
Not sure what you mean by endpoints and batch executing, but you can also make read-only functions using the #[view] macro, instead of the #[endpoint] macro you would normally use for public functions. These view functions can be called without requiring a transaction, using the query endpoint.
And of course you can use loops inside your smart contract to execute a bunch of things at the same time.

How to prevent possible race conditions

I'm using Tarantool 1.5 and Lua procedures.
The documentation says a Lua procedure can yield execution to another after a network/IO operation, for example a box.update call.
My main question is: if I get the return tuple from box.update, does it contain the state "after update, before yield" or "after update, after yield"?
Also, what are the best practices to prevent possible race conditions?
If you need to do something like a transaction in 1.5, you should either make the operation idempotent, or re-select and re-check your data after any yielding operation (update/delete/replace).

Increment field value in different records avoiding race condition in Golang

My Golang code gets different records from the database using goroutines, and increments the value of a particular field in each record.
I can avoid the race condition if I use a Mutex or channels, but I have a bottleneck because every access to the database waits until the previous access is done.
I think I should use one Mutex for every different record, instead of a single Mutex for all of them.
How could I do it?
Thanks for any reply.
In the comments you said you are using Couchbase. If the record you wish to update consists of only an integer, you can use the built in atomic increment functionality with Bucket.Incr.
If the value is part of a larger document, you can use the database's "Check and Set" functionality. In essence, you want to create a loop that does the following:
1. Retrieve the record to be updated, along with its CAS value, using Bucket.Gets.
2. Modify the document returned by (1) as needed.
3. Store the modified document using Bucket.Cas, passing the CAS value retrieved in (1).
4. If (3) succeeds, break out of the loop. Otherwise, start again at (1).
Note that it is important to retrieve the document fresh each time in the loop. If the update fails due to an incorrect CAS value, it means the document was updated between the read and the write.
If the database has a way of handling that (i.e. an atomic counter built in) you should use that.
If not, or if you want to do this in go, you can use buffered channels. Inserting to a buffered channel is not blocking unless the buffer is full.
Then, to handle the increments one at a time, you could run a goroutine with something like
for value := range counterChan {
    incrementCounterBy(value)
}

Oracle Warehouse Builder (OWB): evaluate stored procedure results in a process flow

I have a process flow built by someone prior which calls a very simple stored procedure. Upon completion of the procedure the process flow has two transitions: one if the stored procedure was successful, and the other if not. However, the stored procedure itself does not return anything that can be directly evaluated by the process flow, like a return result. Now, if this procedure fails (with the ubiquitous max extents problem), it will take the branch which calls a stored procedure for sending a failure email message. If it succeeds, the contrary will occur.
I had to tweak the procedure, so I created a new one. Now, whether it fails or succeeds, the success branch is called regardless. I have checked all the docs from Oracle as to how to make this work, and for the life of me cannot determine how to make it work correctly. I first posted this on the Oracle forum and got no responses. Does anyone have an idea how to make this work?
According to the Oracle Warehouse Builder guide:
When you add a transition to the canvas, by default, the transition has no condition applied to it.
Make sure that you have correctly defined a conditional transition as described in the Defining Transition Conditions section of the documentation.
A User Defined Activity will return an ERROR outcome if:
it raises an exception, or
it returns the value 3 and the Use Return as Status option is set to true
"However, the stored procedure itself does not return anything that can be directly evaluated by the process flow like a return result."
This is the crux: if the procedure produces no signal, how can you tell whether it was successful? Indeed, what is the definition of success under this circumstance?
I don't understand why, when you had to "tweak the procedure", you wrote a new one instead of, er, tweaking the original procedure. The only way you're going to solve this is to get some feedback out of the original procedure.
At this point we run out of details. The direct option is to edit the original procedure so it passes back result info, probably through OUT parameters, or else by introducing some logging capability. Alternatively, re-write it to raise an exception on failure. The indirect option is to write some queries to establish what the procedure achieved on a given run, and then figure out whether that constitutes success.
Personally, re-writing the original procedure seems the better option.
If this answer doesn't help you then you need to explain more about your scenario. What your procedure does, how you need to evaluate it, why you can't re-write it.

I Need an Analogy: Triggers and Events

For another question, I'm running into a misconception that seems to arise here at SO occasionally. Some questioners seem to think that Triggers are to Databases as Events are to OOP.
Does anyone have a good analogy to explain why this is a flawed comparison, and the consequences of misapplying it?
EDIT:
Bill K. has hit it correctly, but maybe doesn't see the importance of the critical difference between the event and the callback function that strikes me. Triggers actually cause code to execute every time the event occurs; callbacks only run when one has been registered for an event (which is not true for the vast majority of events); and even then, in most cases the callback's first action is to deregister itself (or at least the callback contains a qualification exit so it only executes once).
If you write a trigger, it will unfailingly execute every time the event occurs, because there's no way to register or deregister the code segment.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
There are exceptions and mitigations in both cases, but the basic patterns of triggers and callbacks are mostly opposite in intention and implementation. Often the distinction doesn't seem to have fully sunk in. (IMHO, YMMV). :D
They're not the same thing, but they're not unrelated.
In both cases, the mechanism can be described approximately as follows:
Some block of code declares "interest" for changes in state.
Your application affects some change.
The system runs the block of code in response to the change.
Perhaps a database trigger is more like a callback function that has registered interest in a specific event.
Here's an analogy: the event is a rubber ball that you throw. The trigger is a dog that chases after a thrown ball.
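To make the registration difference concrete, here is a minimal publish/subscribe sketch in Go (all names are illustrative): a handler runs only while it is subscribed, whereas a trigger, once defined, always fires.

```go
package main

import "fmt"

// bus is a tiny publish/subscribe mechanism: handlers opt in and can
// opt out, unlike a database trigger, which fires unconditionally.
type bus struct {
	handlers map[int]func(string)
	nextID   int
}

func newBus() *bus { return &bus{handlers: map[int]func(string){}} }

// Subscribe registers interest and returns a deregistration function.
func (b *bus) Subscribe(fn func(string)) (unsubscribe func()) {
	id := b.nextID
	b.nextID++
	b.handlers[id] = fn
	return func() { delete(b.handlers, id) }
}

// Publish runs whatever handlers happen to be registered right now.
func (b *bus) Publish(event string) {
	for _, fn := range b.handlers {
		fn(event)
	}
}

func main() {
	b := newBus()
	count := 0
	unsub := b.Subscribe(func(e string) { count++ })

	b.Publish("row inserted") // handler registered: it runs
	unsub()
	b.Publish("row inserted") // handler deregistered: nothing runs

	fmt.Println(count) // 1; a trigger, by contrast, would have fired twice
}
```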
If there's some other difference that you have in mind that makes it "dangerous" (note: OP has edited this choice of word out of the question) to compare triggers and events, you can describe what you mean.
"Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity)."
Okay, I see what you mean more clearly. But I think it's in some ways subject to the implementation. I wouldn't assume an event handler has to deregister itself; it depends on the system you're using. A UNIX signal handler, for example, has to prevent itself from catching a new signal while it's already handling one. But a Java servlet inside a Tomcat container should be thread-safe because it may be called concurrently by multiple threads. They're both event handlers, of different kinds.
Event handlers may be synchronous or asynchronous. Can a handler in a publish/subscribe system read messages that were posted recently, but prior to the handler registering its interest? Or only messages posted concurrently?
There's another important reason to treat triggers as different from event handlers: I frequently recommend against doing anything in a trigger that affects state outside the database.
For example, sending an email, writing to a file, posting to a web service, or forking a process is inappropriate inside a trigger, if for no other reason than that the transaction that spawned the trigger may be rolled back, and you can't roll back those external effects. You may not even be using explicit transactions: say you send an email in a BEFORE trigger, and then the operation fails because of a NOT NULL constraint or something.
Instead, all such work should be done by code in one's application, after one has confirmed that the SQL operation was successful and the transaction committed.
It's too bad that people keep trying to do inappropriate work inside a trigger. There are senior developers at MySQL who promote UDFs to read and write data in memcached. Wow -- I just noticed these have made it into the MySQL 6.0 product!! Shocking!
So here's another attempt at an analogy, comparing triggers and events to the process of a criminal trial:
A BEFORE trigger is an allegation.
An AFTER trigger is an indictment.
COMMIT is a conviction after a guilty verdict.
ROLLBACK is an acquittal after an innocent verdict.
You only want to put the perpetrator in prison after they are convicted.
Whereas an EVENT is the crime itself.
