I want to store all the blockchain data in an off-chain database.
The RPC API has a method called EXPERIMENTAL_changes. I was told I can do this by HTTP polling of that method, but I am unable to figure out how to use it.
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=EXPERIMENTAL_changes \
    params:='{
        "changes_type": "data_changes",
        "account_ids": ["guest-book.testnet"],
        "key_prefix_base64": "",
        "block_id": 19450732
    }'
For example, here the result contains:
"change": { "account_id": "guest-book.testnet", "key_base64": "bTo6Mzk=", "value_base64": "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ==" }
What is key_base64?
Decoding it to a string gives m::39.
What is m::39?
For example, I have the following state data in a Rust structure:
pub struct Demo {
    user_profile_map: TreeMap<u128, User>,
    user_products_map: TreeMap<u128, UnorderedSet<u128>>,   // (user_id, set<product_id>)
    product_reviews_map: TreeMap<u128, UnorderedSet<u128>>, // (product_id, set<review_id>)
    product_check_bounty: LookupMap<u128, Vector<u64>>,
}
How can I know when anything changes in these variables?
Will I have to check every block from the point the contract was deployed to find where the changes are?
I want to store all the blockchain data in an off-chain database.
If so, I recommend you take a look at the Indexer Framework, which allows you to get a stream of blocks and handle them. We use it to build Indexer for Wallet (which keeps track of every added and deleted access key, and stores those in Postgres) and Indexer for Explorer (which keeps track of every block, chunk, transaction, receipt, execution outcome, state change, account, and access key, and stores all of that in Postgres).
What is m::39?
Contracts in NEAR Protocol have access to key-value storage (state), so at the lowest level you operate with key-value operations (the NEAR SDK for AssemblyScript defines a Storage class with get and set operations, and the NEAR SDK for Rust has storage_read and storage_write calls to persist data).
Guest Book example uses a high-level abstraction called PersistentVector, which automatically reads and writes its records from/to NEAR key-value storage (state). As you can see:
export const messages = new PersistentVector<PostedMessage>("m");
Guest Book defines the messages to be stored in the storage with the m prefix, hence you see m::39, which basically means it is messages[39] stored in the key-value storage.
What is key_base64?
As key-value storage implies, the data is stored and accessed by keys, and a key can be binary, so base64 encoding is used to give JSON-RPC API users a way to query those binary keys as well (there is no way to pass a raw binary blob in JSON).
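For example, in Node.js you can decode both fields from the change above (a quick sketch; the values are the ones returned in the example response):

// Decode the base64-encoded key and value from the EXPERIMENTAL_changes response.
const key = Buffer.from("bTo6Mzk=", "base64").toString("utf8");
console.log(key); // m::39

const value = Buffer.from(
    "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ==",
    "base64"
).toString("utf8");
console.log(JSON.parse(value)); // { premium: false, sender: 'fhr.testnet', text: 'Hi' }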
How can I know when anything changes in these variables? Will I have to check every block from the point the contract was deployed to find where the changes are?
Correct, you need to follow every block and check the changes. That is why we built the Indexer Framework: to enable the community to build services on top of it (we chose to build the Indexer for Wallet and Indexer for Explorer applications, but others may decide to build a GraphQL service like The Graph).
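If you do decide to poll the JSON-RPC API directly, a minimal sketch could look like this (TypeScript, Node 18+ with built-in fetch; the starting height and the error handling are assumptions you will want to refine, since NEAR can skip block heights):

const RPC_URL = "https://rpc.testnet.near.org";

// Fetch the state changes of guest-book.testnet at a given block height.
async function changesAt(blockId: number): Promise<any> {
    const res = await fetch(RPC_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            jsonrpc: "2.0",
            id: "dontcare",
            method: "EXPERIMENTAL_changes",
            params: {
                changes_type: "data_changes",
                account_ids: ["guest-book.testnet"],
                key_prefix_base64: "",
                block_id: blockId,
            },
        }),
    });
    return res.json();
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Naive polling loop: walk block heights one by one and persist any changes.
async function poll(fromBlock: number): Promise<void> {
    let height = fromBlock;
    while (true) {
        const { result, error } = await changesAt(height);
        if (error) {
            // Either the block is not produced yet, or this height was skipped
            // by the chain; a real indexer must tell these apart (e.g. by
            // comparing against the latest block height). Here we just retry.
            await sleep(1000);
            continue;
        }
        for (const change of result.changes) {
            // Write change.change.key_base64 / value_base64 to your off-chain DB here.
            console.log(height, change.type, change.change.key_base64);
        }
        height++;
    }
}

poll(19450732);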
Related
I don't know how to write a smart contract in Solana that, after executing its logic, returns an array of integers, strings, ... to the client, nor how to fetch that data using Web3.
There's a syscall available to on-chain programs called set_return_data, which puts data into a buffer that can be read by higher-level programs using get_return_data. This is all mediated through opaque byte buffers, so you'll need to know how to decode the response.
If you want to fetch the data from the client side, you can simulate the transaction and read the data back from the return_data field in the response: https://edge.docs.solana.com/developing/clients/jsonrpc-api#results-50
The RPC support in simulated transactions is very new in version 1.11, but the return data is available in earlier versions.
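For example, with a recent version of @solana/web3.js, the client side might look roughly like this (a sketch, not a definitive implementation; PROGRAM_ID and the empty instruction data are placeholders for your own program):

import {
    Connection,
    Keypair,
    PublicKey,
    Transaction,
    TransactionInstruction,
} from "@solana/web3.js";

// Placeholder: replace with your deployed program's ID.
const PROGRAM_ID = new PublicKey("11111111111111111111111111111111");

async function fetchReturnData(payer: Keypair): Promise<Buffer | null> {
    const connection = new Connection("https://api.devnet.solana.com");

    const ix = new TransactionInstruction({
        programId: PROGRAM_ID,
        keys: [],
        data: Buffer.alloc(0), // your instruction data goes here
    });
    const tx = new Transaction().add(ix);
    tx.feePayer = payer.publicKey;
    tx.recentBlockhash = (await connection.getLatestBlockhash()).blockhash;

    // Simulate instead of sending; the simulation response carries the
    // return data the program set via set_return_data.
    const sim = await connection.simulateTransaction(tx);
    const ret = sim.value.returnData;
    if (!ret) return null;

    // ret.data is [base64String, 'base64']; decode it, then deserialize
    // according to whatever encoding your program used.
    return Buffer.from(ret.data[0], "base64");
}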
Source code for set_return_data at https://github.com/solana-labs/solana/blob/658752cda710cb358d7ccbbc2cee06bf8009c2d4/sdk/program/src/program.rs#L102
Source code for get_return_data at https://github.com/solana-labs/solana/blob/658752cda710cb358d7ccbbc2cee06bf8009c2d4/sdk/program/src/program.rs#L117
So, programs do not return data (other than success or failure).
However, most programs write data to a program-owned account's data field, and this can be read from client apps (Rust, Python, TS/JS, etc.).
If using the Solana web3 library, you can call getAccountInfo on the Connection object. This returns the account's byte array, which you then need to deserialize. You have to know how the program serializes the data to reverse it successfully.
Check the Solana Cookbook for an overview of deserializing account data with borsh: https://solanacookbook.com/guides/serialization.html#how-to-deserialize-account-data-on-the-client
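For instance, with @solana/web3.js and the borsh package, reading and deserializing an account might look roughly like this (the Counter layout is a made-up example; your schema must mirror what the program actually serializes):

import { Connection, PublicKey } from "@solana/web3.js";
import * as borsh from "borsh";

// Made-up account layout for illustration; it must match the
// program's own (Rust/borsh) struct definition.
class Counter {
    count = 0;
    constructor(fields?: { count: number }) {
        if (fields) this.count = fields.count;
    }
}

const schema = new Map([
    [Counter, { kind: "struct", fields: [["count", "u32"]] }],
]);

async function readCounter(address: PublicKey): Promise<Counter> {
    const connection = new Connection("https://api.devnet.solana.com");
    const info = await connection.getAccountInfo(address);
    if (info === null) throw new Error("Account not found");
    // Deserialize the raw account bytes using the schema above.
    return borsh.deserialize(schema, Counter, info.data);
}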
I'm reading about caching strategies such as cache-aside, write-through, write-back, ... In the specific cases of write-through and write-back, it is implied that the cache itself is responsible for writing to the database and the event queue, respectively. (For full context, here is the article: https://github.com/donnemartin/system-design-primer#when-to-update-the-cache)
For example, write-through is illustrated as
Application code:

set_user(12345, {"foo":"bar"})

Cache code:

def set_user(user_id, values):
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
For now, let's assume we're using Redis.
In the concrete example above, is the hypothetical set_user function invoked on the Redis client's machine, or on the Redis server?
Now, there seems to be ways to invoke custom logic on the Redis server, e.g., by writing Lua scripts, but I'm skeptical that that's done in practice in order to implement this caching strategy, partly because I've never heard of anyone doing it.
I've seen other articles showing this strategy is implemented solely on the Redis client's machine, but I'm not sure what resources to believe at this point.
Thanks for any help!
It's part of the application. In fact, it would be more appropriate to call the example "data store code" instead of "cache code". The set_user method belongs to a base UserStore class, with different implementations based on data store type, write policy, etc. For write-through, it would be:
class WriteThroughUserStore(UserStore):
    def __init__(self, cache_user_store, db_user_store):
        self.cache_user_store = cache_user_store
        self.db_user_store = db_user_store

    def get_user(self, user_id):
        return self.cache_user_store.get_user(user_id)

    def set_user(self, user):
        self.db_user_store.set_user(user)
        self.cache_user_store.set_user(user)
The key point of "write-through" is that the write operation is confirmed complete only after writing data to both cache and database synchronously. The order does not matter: you could update cache first, or update database first, or even do them in parallel.
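To make the same point with Redis specifically: the logic still lives in the application process, not on the Redis server. A minimal TypeScript sketch, where db.updateUser is a stand-in for your database layer (an assumption for this example):

import { createClient } from "redis";

// Stand-in for your database access layer (assumed for this sketch).
declare const db: {
    updateUser(userId: number, values: object): Promise<object>;
};

const cache = createClient(); // call await cache.connect() once at startup

// Write-through: the call returns only after BOTH writes have completed.
async function setUser(userId: number, values: object): Promise<void> {
    const user = await db.updateUser(userId, values);        // 1. write to the database
    await cache.set(`user:${userId}`, JSON.stringify(user)); // 2. write to the cache
}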
I get a file with 4000 entries and debatch it, so I don't lose the whole message if one entry has corrupt data.
The BizTalk map accesses a SQL Server. Before I debatched the message, I simply cached the SQL data in the map, but now I have 4000 independent maps.
Without caching, the process takes about 30 times longer.
Is there a way to cache the data from the SQL Server somewhere outside of the map without losing much performance?
Accessing a database in a Map is not a recommended pattern.
Since what you describe sounds like you're retrieving static reference data, another option is to move the process to an Orchestration where the reference data is retrieved once into a Message.
Then, you can use a dual-input Map, supplying both the reference data and the business message.
In this pattern, you can either debatch in the Orchestration or use a Sequential Convoy.
I would always avoid accessing SQL Server in a map: it becomes very easy to inadvertently make many more calls than you intend (whether because of a mistake in the map design or because of unexpected volume or usage of the map on a particular port or set of ports). In fact, I would generally avoid making any kind of call in a map that has to access another system or service, but if you must, then caching can help.
You can cache using, for example, MemoryCache. The pattern I use generally involves a custom C# library where you first check the cache for your value, and on a miss you query SQL (either for the particular entry or to populate the entire cache), e.g.:
object _syncRoot = new object();
...
public string CheckCache(string key)
{
    string check = MemoryCache.Default.Get(key) as string;
    if (check != null)
    {
        return check;
    }
    lock (_syncRoot)
    {
        // make sure someone else didn't get here before we acquired the lock; avoid duplicate work
        check = MemoryCache.Default.Get(key) as string;
        if (check != null) return check;

        string sql = @"SELECT ...";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = sql;
                cmd.Parameters.AddWithValue(...);
                // ExecuteScalar or ExecuteReader as appropriate, read the value into check,
                // then use MemoryCache.Default.Add with a sensible expiration to cache it
            }
        }
        return check;
    }
}
A few things to keep in mind:
This will work on a per-AppDomain basis, and pipelines and orchestrations run in separate AppDomains. If you are executing this map in both places, you'll end up with caches in both places. The complexity added in trying to share this across AppDomains is probably not worth it, but if you really need that, you should isolate your caching into something like a WCF NetTcp service.
This will use more memory. You shouldn't just throw everything and anything into a cache in BizTalk; if you're going to cache stuff, make sure you have lots of available memory on the machine and that BizTalk is configured to be able to use it.
The MemoryCache can store whatever you want - I'm using strings here, but it could be other primitive types or objects as well.
I'm new to the Go language (Golang) and I'm writing a web-based application. I'd like to use session variables, like the kind in PHP (variables that are available from one page to the next and unique for a user session). Is there something like that in Go? If not, how would I go about implementing them myself? Or what alternative methods are there?
You probably want to take a look at gorilla. It has session support as documented here.
Other than that, or possibly one of the other web toolkits for Go, you would have to roll your own.
Possible solutions might be:
A goroutine per user session to store session variables in memory.
Store your variables in a session cookie.
Use a database to store user session data.
I'll leave the implementation details of each of those to the reader.
Here's an alternative in case you just want session support without a complete web toolkit.
https://github.com/bpowers/seshcookie
Here's another alternative (disclosure: I'm the author):
https://github.com/icza/session
Quoting from its doc:
This package provides an easy-to-use, extensible and secure session implementation and management. Package documentation can be found at godoc.org:
https://godoc.org/github.com/icza/session
This is "just" an HTTP session implementation and management, you can use it as-is, or with any existing Go web toolkits and frameworks.
Overview
There are 3 key players in the package:
Session is the (HTTP) session interface. We can use it to store and retrieve constant and variable attributes.
Store is a session store interface which is responsible for storing sessions and making them retrievable by their IDs on the server side.
Manager is a session manager interface which is responsible for acquiring a Session from an (incoming) HTTP request, and for adding a Session to an HTTP response to let the client know about the session. A Manager has a backing Store which is responsible for managing Session values on the server side.
Players of this package are represented by interfaces, and various implementations are provided for all these players.
You are not bound by the provided implementations, feel free to provide your own implementations for any of the players.
Usage
Usage can't be simpler than this. To get the current session associated with the http.Request:
sess := session.Get(r)
if sess == nil {
    // No session (yet)
} else {
    // We have a session, use it
}
To create a new session (e.g. on a successful login) and add it to an http.ResponseWriter (to let the client know about the session):
sess := session.NewSession()
session.Add(sess, w)
Let's see a more advanced session creation: let's provide a constant attribute (for the lifetime of the session) and an initial, variable attribute:
sess := session.NewSessionOptions(&session.SessOptions{
    CAttrs: map[string]interface{}{"UserName": userName},
    Attrs:  map[string]interface{}{"Count": 1},
})
And to access these attributes and change value of "Count":
userName := sess.CAttr("UserName")
count := sess.Attr("Count").(int) // Type assertion, you might wanna check if it succeeds
sess.SetAttr("Count", count+1) // Increment count
(Of course variable attributes can be added later on too with Session.SetAttr(), not just at session creation.)
To remove a session (e.g. on logout):
session.Remove(sess, w)
Check out the session demo application which shows all these in action.
Google App Engine support
The package provides support for Google App Engine (GAE) platform.
The documentation doesn't include it (due to the +build appengine build constraint), but here it is: gae_memcache_store.go
The implementation stores sessions in Memcache and also saves sessions to the Datastore as a backup in case data is evicted from Memcache. This behaviour is optional; the Datastore can be disabled completely. You can also choose whether saving to the Datastore happens synchronously (in the same goroutine) or asynchronously (in another goroutine), resulting in faster response times.
We can use NewMemcacheStore() and NewMemcacheStoreOptions() functions to create a session Store implementation which stores sessions in GAE's Memcache. Important to note that since accessing the Memcache relies on Appengine Context which is bound to an http.Request, the returned Store can only be used for the lifetime of a request! Note that the Store will automatically "flush" sessions accessed from it when the Store is closed, so it is very important to close the Store at the end of your request; this is usually done by closing the session manager to which you passed the store (preferably with the defer statement).
So in each request handling we have to create a new session manager using a new Store, and we can use the session manager to do session-related tasks, something like this:
ctx := appengine.NewContext(r)
sessmgr := session.NewCookieManager(session.NewMemcacheStore(ctx))
defer sessmgr.Close() // This will ensure changes made to the session are auto-saved
                      // in Memcache (and optionally in the Datastore).

sess := sessmgr.Get(r) // Get current session
if sess != nil {
    // Session exists, do something with it.
    ctx.Infof("Count: %v", sess.Attr("Count"))
} else {
    // No session yet, let's create one and add it:
    sess = session.NewSession()
    sess.SetAttr("Count", 1)
    sessmgr.Add(sess, w)
}
Expired sessions are not automatically removed from the Datastore. To remove expired sessions, the package provides a PurgeExpiredSessFromDSFunc() function which returns an http.HandlerFunc. It is recommended to register the returned handler function to a path which can then be defined as a cron job to be called periodically, e.g. every 30 minutes or so (your choice). Since cron handlers may run for up to 10 minutes, the returned handler will stop after 8 minutes to complete safely, even if there are more expired, undeleted sessions. It can be registered like this:
http.HandleFunc("/demo/purge", session.PurgeExpiredSessFromDSFunc(""))
Check out the GAE session demo application which shows how it can be used.
cron.yaml file of the demo shows how a cron job can be defined to purge expired sessions.
I'm currently learning node.js by developing a small chat application over TCP (classic case).
What I would like to do now is to extend this application to use static users:
All logic around messaging should be based on users rather than sessions.
Each session therefore needs to be saved along with a reference to the user (so that the application can send messages in the correct session based on user name).
I have looked into Redis to store this data, but have not been successful so far.
tcpServer.on 'connection', (tcpSocket) ->
  tcpSocket.write "Welcome"
  redis.hmset "sessions", userid, tcpSocket
  tcpSocket.on "data", (data) ->
    redis.hvals "sessions", (err, repl) ->
      repl.forEach (reply, y) ->
        reply.write "Data is written to clients!"

tcpServer.listen 7000
Current error message: TypeError: Object [object Object] has no method 'write'
This indicates that I cannot store a tcpSocket the way I'm trying to right now. Can someone tell me whether I can make minor adjustments to get this right, or should I rethink this solution?
Redis supports very few data types: strings, lists, sets, sorted sets, and hashes. That's it. Some other types, like Date, can easily be converted to/from a string, so they can be used too. But a socket is a complex internal object, so you most probably can't convert it into a string and therefore can't store it in Redis.
Since socket objects are too complex to store in Redis, it seems the way to go is to attach an identifier to each socket, as described in the link below.
This identifier can then of course be saved in a database to create relations between different objects, etc.
How to uniquely identify a socket with Node.js
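A minimal sketch of that approach in TypeScript with the node-redis v4 client (the "userid" field is illustrative; derive it from your own login/handshake step): keep the live sockets in an in-process Map keyed by a generated ID, and store only the ID in Redis.

import * as net from "net";
import { randomUUID } from "crypto";
import { createClient } from "redis";

// Live sockets stay in process memory; Redis only holds their IDs.
const sockets = new Map<string, net.Socket>();

async function main(): Promise<void> {
    const redis = createClient();
    await redis.connect();

    const tcpServer = net.createServer((tcpSocket) => {
        tcpSocket.write("Welcome");

        const socketId = randomUUID();
        sockets.set(socketId, tcpSocket);
        // "userid" is illustrative; use the real user ID once authenticated.
        redis.hSet("sessions", "userid", socketId);

        tcpSocket.on("data", async () => {
            // Broadcast: look up all session socket IDs in Redis, then
            // write to the corresponding in-memory sockets.
            const ids = await redis.hVals("sessions");
            for (const id of ids) {
                sockets.get(id)?.write("Data is written to clients!");
            }
        });

        tcpSocket.on("close", () => sockets.delete(socketId));
    });

    tcpServer.listen(7000);
}

main();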