When was the last time a Chainlink data feed was updated? Are Chainlink data feeds running?

I'm looking to know both:
When was the last Chainlink Data Feed update?
Are the Chainlink Data Feeds running right now?

We can do this programmatically or manually:
Manually
You can see the last update, and whether the feeds are running, by:
Going to the docs page for the specific Chainlink data feed you are curious about
Opening the address of the data feed in a block explorer like Etherscan (the docs will offer a link to Etherscan for you)
Going to the Contract tab -> Read Contract and clicking the aggregator
In the aggregator contract, reading latestRoundData and checking the timestamp of the most recent update
Programmatically:
In your Solidity code, just return timeStamp from your latestRoundData call. Here are the docs for reference.
function getLatestTime() public view returns (uint) {
    // priceFeed is an AggregatorV3Interface pointing at the feed's proxy address
    (
        uint80 roundID,
        int price,
        uint startedAt,
        uint timeStamp,
        uint80 answeredInRound
    ) = priceFeed.latestRoundData();
    // timeStamp ("updatedAt" in the docs) is the Unix time of the latest answer
    return timeStamp;
}
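To answer the second question (is the feed running right now?), compare that timestamp with the current time. A rough off-chain sketch in C#, assuming you have already fetched the timestamp somehow; the 3600-second heartbeat is a made-up example, each feed documents its own:

    // Sketch: decide whether a feed looks "live" from its latest update timestamp.
    // Assumption: 'updatedAt' is the Unix timestamp (seconds) from latestRoundData.
    var updatedAt = DateTimeOffset.FromUnixTimeSeconds(1_700_000_000);
    var heartbeat = TimeSpan.FromSeconds(3600); // placeholder; feed-specific

    var age = DateTimeOffset.UtcNow - updatedAt;
    bool likelyRunning = age <= heartbeat;
    Console.WriteLine($"Last update {age.TotalMinutes:F1} minutes ago; running: {likelyRunning}");

If the answer is older than the feed's documented heartbeat, the feed is likely stale or paused.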

MassTransit StateMachine correlation by Int

New to MassTransit here, and I wanted to correlate my messages with an int value (let's call it OrderId, of type int; I will use the same order management example from MassTransit).
I wanted to see if anyone was able to use an int to correlate events and messages in MassTransit.
I used this code in my events (both of them):
x.CorrelateById<int>(state => state.OrderId, context => context.Message.OrderId)
This compiles fine but then throws a NotImplementedByDesignException. The MassTransit docs say: "Redis only supports event correlation by CorrelationId, it does not support queries. If a saga uses expression-based correlation, a NotImplementedByDesignException will be thrown."
I am using CorrelateById(), so I am not sure why I am seeing this exception. I don't see any query here (or is it the func that returns the OrderId? The CorrelationId overload has a similar expression, although it takes only one argument, of type Func<ConsumeContext<TMessage>, Guid>).
All I want is to correlate messages and events by an int property arriving on the first event of the state machine (which I have no control over, by the way).
I feel that creating another Guid just for MassTransit and linking it to that OrderId is not the best option here.
My other question: is there any out-of-the-box way in MassTransit to get all the event data for a specific CorrelationId (event-sourcing style)? I saw there was a MassTransit.EventStoreIntegration repo on GitHub, but since we can get the last version of the state instance, I thought there would be a way to see all the state instances or messages being persisted and then pulled.
Sure, I can see that in the logs, but I would like to see only the state changes that were pushed.
Thanks
Correlation by anything other than the Guid CorrelationId property is a query. And, as you've found out, Redis doesn't support queries. If you need to correlate events using a type other than a Guid, Redis is not an option.
As per @Chris's answer, and because I wanted to use an int property and still use Redis, I used a Guid CorrelationId generated from that int value with padded zeros:
CorrelationId = new Guid(OrderIdIntValue.ToString().PadLeft(32, '0'))
I am aware of the Guid vs. int discussion in distributed systems.
For my case this works with no issues, since that int value is generated by a single source of truth (which I consume and don't have control over).
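For illustration, a hypothetical helper wrapping that one-liner (the method name is mine, not a MassTransit API; this works because the decimal digits 0-9 are all valid hex digits, so the padded string always parses as a 32-digit Guid):

    // Hypothetical helper: deterministic Guid from an int order id.
    static Guid ToCorrelationId(int orderId) =>
        new Guid(orderId.ToString().PadLeft(32, '0'));

    // ToCorrelationId(42) => 00000000-0000-0000-0000-000000000042

Since the mapping is injective, two distinct OrderIds can never collide on the same CorrelationId.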

How to store the updates of state in an off-chain database?

I want to store all the blockchain data in an off-chain database.
The RPC has a method called EXPERIMENTAL_changes; I was told that I can do this by HTTP polling of this method, but I am unable to find out how to use it.
http post https://rpc.testnet.near.org jsonrpc=2.0 id=dontcare method=EXPERIMENTAL_changes \
    params:='{"changes_type": "data_changes", "account_ids": ["guest-book.testnet"], "key_prefix_base64": "", "block_id": 19450732}'
For example, here the results give:
"change": { "account_id": "guest-book.testnet", "key_base64": "bTo6Mzk=", "value_base64": "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ==" }
What is key_base64?
Decoding it to string gives m::39
What is m::39?
For example, I have the following state data in a Rust structure:
pub struct Demo {
    user_profile_map: TreeMap<u128, User>,
    user_products_map: TreeMap<u128, UnorderedSet<u128>>,   // (user_id, set<product_id>)
    product_reviews_map: TreeMap<u128, UnorderedSet<u128>>, // (product_id, set<review_id>)
    product_check_bounty: LookupMap<u128, Vector<u64>>,
}
How do I know when anything changes in these variables?
Will I have to check every block id, from the point the contract was deployed, to find where the changes are?
I want to store all the blockchain data in an off-chain database.
If so, I recommend you take a look at the Indexer Framework, which allows you to get a stream of blocks and handle them. We use it to build Indexer for Wallet (keeps track of every added and deleted access key, and stores those into Postgres) and Indexer for Explorer (keeps track of every block, chunk, transaction, receipt, execution outcome, state changes, accounts, and access keys, and stores all of that in Postgres)
What is m::39?
Contracts in NEAR Protocol have access to key-value storage (state), so at the lowest level you operate with key-value operations (the NEAR SDK for AssemblyScript defines a Storage class with get and set operations, and the NEAR SDK for Rust has storage_read and storage_write calls to preserve data).
Guest Book example uses a high-level abstraction called PersistentVector, which automatically reads and writes its records from/to NEAR key-value storage (state). As you can see:
export const messages = new PersistentVector<PostedMessage>("m");
Guest Book defines the messages to be stored in the storage with the m prefix, hence you see m::39, which basically means it is messages[39] stored in the key-value storage.
What is key_base64?
As "key-value storage" implies, the data is stored and accessed by keys, and a key can be binary, so base64 encoding is used to give JSON-RPC API users a way to query those binary keys as well (there is no way to pass a raw binary blob in JSON).
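Decoding those fields yourself is a one-liner; a minimal C# sketch, using the strings from the response above:

    using System.Text;

    // key_base64 from the EXPERIMENTAL_changes response above
    var key = Encoding.UTF8.GetString(Convert.FromBase64String("bTo6Mzk="));
    Console.WriteLine(key); // m::39 -> the "m" prefix plus index 39

    // value_base64 decodes to the stored JSON payload
    var value = Encoding.UTF8.GetString(Convert.FromBase64String(
        "eyJwcmVtaXVtIjpmYWxzZSwic2VuZGVyIjoiZmhyLnRlc3RuZXQiLCJ0ZXh0IjoiSGkifQ=="));
    Console.WriteLine(value); // {"premium":false,"sender":"fhr.testnet","text":"Hi"}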
How do I know when anything changes in these variables? Will I have to check every block id, from the point the contract was deployed, to find where the changes are?
Correct, you need to follow every block and check the changes. That is why we have built the Indexer Framework: to enable the community to build services on top of it (we chose to build the applications Indexer for Wallet and Indexer for Explorer, but others may decide to build a GraphQL service like The Graph).
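If you do want to poll EXPERIMENTAL_changes block by block, as in the question, a rough sketch of the loop (C#; the method name and params are copied from the question above, while the block-advancing and error handling are simplified assumptions):

    using System.Net.Http.Json;

    var http = new HttpClient();
    long blockId = 19450732; // start from the block where the contract was deployed

    while (true)
    {
        var request = new
        {
            jsonrpc = "2.0",
            id = "dontcare",
            method = "EXPERIMENTAL_changes",
            @params = new
            {
                changes_type = "data_changes",
                account_ids = new[] { "guest-book.testnet" },
                key_prefix_base64 = "",
                block_id = blockId,
            },
        };

        var response = await http.PostAsJsonAsync("https://rpc.testnet.near.org", request);
        var json = await response.Content.ReadAsStringAsync();
        // TODO: parse the "changes" array out of 'json' and write it to your database.

        blockId++; // naive: some heights are skipped and will return an error you must handle
    }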

Responding to Conversations async: Graph or Bot?

I have a Teams Message extension that returns a Task response, which is a medium-sized embedded web view (iframe).
This is working successfully, including adding a custom Tab within the channel and other nice magic calls to Microsoft Graph.
What I am confused about is how to do the following (this is probably me not understanding the naming of things):
Insert "something" back into the Message/Post stream which is a link to the newly created Tab, like what you get when a "configureTabs" style Tab is created: there is a friendly Message (Post) in the chat pointing to this new Tab.
Do I do this with Microsoft Graph or back through the Bot?
The code that does the communication may be a different service elsewhere acting asynchronously, so it needs to communicate with something, somewhere, with context. I'm confused whether this is the Bot with some params or Microsoft Graph with params.
How to insert an image (rather than a link to the tab) into the Message/Post stream, showing the image itself rather than a link off to some random URL.
I could not find any samples that do this; again, this will be async as per the above, but will the format of the message be a Card or something custom?
So just to be clear, a Task Response is NOT the same as a Tab, albeit they might end up hosted in the same backend web application (and also your Tab can actually bring up your Task Response popup/iframe using the Teams JavaScript library).
Aside from that, in order to post something back to the channel, like when the Tab is created, there are two ways to do so:
The first is to use the Graph API's Create chatMessage option (that link is just for a channel, though; I'm not sure if your tab/task applies to group chats and/or 1-1 chats as well).
The second option is to have a Bot be part of your application as well. Then, when you're ready to send something to the channel, you'd effectively be sending what's called a "proactive message". You need certain reference data to do this, which you get when the bot is installed into the channel (the "conversation reference", "ServiceUrl", and so on). I describe this more in my answer at Programmatically sending a message to a bot in Microsoft Teams.
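As a rough illustration of that second option (names like adapter, botAppId, storedReference, and newTabUrl are assumptions; the ConversationReference must have been captured and stored when the bot was added to the channel, e.g. via activity.GetConversationReference()):

    // Sketch: proactive message from a background service using Bot Framework.
    // 'adapter' is your BotAdapter (e.g. the CloudAdapter registered in DI).
    await adapter.ContinueConversationAsync(
        botAppId,
        storedReference,
        async (turnContext, cancellationToken) =>
        {
            await turnContext.SendActivityAsync(
                MessageFactory.Text($"Your new tab is ready: {newTabUrl}"),
                cancellationToken);
        },
        CancellationToken.None);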
With regard to sending the image, either of the above would work too. You'd need to make use of one of the kinds of "Cards" (basically "richer" messages than just raw text). You can learn more about this at Introducing cards, and about the types of cards for Teams at Card reference. A few of them can be used to send an image, depending on what else you want the card to do. For instance, an Adaptive Card can send an image, some text, and an action button of some sort.
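For the image case specifically, a minimal sketch of sending an Adaptive Card whose body is just an inline image (the URL is a placeholder; turnContext and cancellationToken are the ones from the proactive callback above, and Newtonsoft.Json is assumed):

    var cardJson = @"{
        ""$schema"": ""http://adaptivecards.io/schemas/adaptive-card.json"",
        ""type"": ""AdaptiveCard"",
        ""version"": ""1.2"",
        ""body"": [
            { ""type"": ""Image"", ""url"": ""https://example.com/screenshot.png"" }
        ]
    }";

    var attachment = new Attachment
    {
        ContentType = "application/vnd.microsoft.card.adaptive",
        Content = Newtonsoft.Json.JsonConvert.DeserializeObject(cardJson),
    };
    await turnContext.SendActivityAsync(MessageFactory.Attachment(attachment), cancellationToken);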
Hope that helps
To close the loop for future readers:
I used the following Microsoft Graph API docs, plus the posting above, and this is working: Create chatMessage in a channel and Creating a Custom Microsoft Graph call from the SDK.
The custom Graph call (as it is not implemented in the .NET SDK at the time of this response) looks something like:
var convoReq = $"https://graph.microsoft.com/beta/teams/{groupId}/channels/{channelId}/messages";
var body = this.TeamsMessageFactory(newCreatedTabUrl, anotherstring).ToJson();
var postMessage = new HttpRequestMessage(HttpMethod.Post, convoReq)
{
    Content = new StringContent(body, System.Text.Encoding.UTF8, "application/json")
};
// Let the Graph SDK add the auth header, then send the raw request ourselves
await _graphClient.CurrentGraphClient.AuthenticationProvider.AuthenticateRequestAsync(postMessage);
var response = await _graphClient.CurrentGraphClient.HttpProvider.SendAsync(postMessage);
if (response.IsSuccessStatusCode)
{
    var content = await response.Content.ReadAsStringAsync();
    return true;
}
return false;
The groupId and channelId are found elsewhere, and the TeamsMessageFactory is just some boilerplate that serializes the C# object graph for the POST request, as detailed in Create chatMessage in a channel.

Should we push and pull all tables all the time for offline sync?

I am not able to understand the concept of Azure Mobile offline sync using Xamarin.Forms. I followed this article and samples.
1) According to this, I must push all changes and pull all tables.
It looks like it pushes all changes and pulls all tables, according to the code here:
public async Task SyncOfflineCacheAsync()
{
    Debug.WriteLine("SyncOfflineCacheAsync: Initializing...");
    await InitializeAsync();

    // Push the Operations Queue to the mobile backend
    Debug.WriteLine("SyncOfflineCacheAsync: Pushing Changes");
    await Client.SyncContext.PushAsync();

    // Pull each sync table
    Debug.WriteLine("SyncOfflineCacheAsync: Pulling tags table");
    var tagTable = await GetTableAsync<Tag>();
    await tagTable.PullAsync();

    Debug.WriteLine("SyncOfflineCacheAsync: Pulling tasks table");
    var taskTable = await GetTableAsync<TodoItem>();
    await taskTable.PullAsync();
}
Why should I get all tables every time? Isn't it an expensive operation? I debugged it, and it always calls the GetAll function in the backend. What is the advantage of this usage?
2) If I only change Complete on a TodoItem, should I push the entire item, or is there a way to push only Complete along with the Id? I read in the documentation that it should be possible, but I can't find how.
public class TodoItem : TableData
{
    public string Text { get; set; }
    public bool Complete { get; set; }
    public string TagId { get; set; }
}
1) According to this, I must push all changes and pull all tables.
As How offline synchronization works states about the Push operation:
Push is an operation on the sync context and sends all CUD changes since the last push. Note that it is not possible to send only an individual table's changes, because otherwise operations could be sent out of order. Push executes a series of REST calls to your Azure Mobile App backend, which in turn modifies your server database.
For the Pull operation, you could leverage Incremental Sync, which retrieves only the records modified after the latest updatedAt timestamp stored in your local SQLite table. For details, you could follow this issue. Also, you could follow the Query Management section in Adrian Hall's book, The Mobile Client.
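For illustration, a minimal sketch of an incremental pull with the Azure Mobile client SDK (the query ID "allTodoItems" is an arbitrary name of my choosing; passing a non-null query ID is what turns on incremental sync):

    // Assumption: 'client' is your MobileServiceClient and TodoItem is defined
    // in the local store. A stable queryId makes the SDK remember the highest
    // updatedAt it has seen under that key and pull only newer records next time.
    var todoTable = client.GetSyncTable<TodoItem>();
    await todoTable.PullAsync("allTodoItems", todoTable.CreateQuery());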
2) If I only change Complete on a TodoItem, should I push the entire item, or is there a way to push only Complete along with the Id?
AFAIK, you cannot achieve this; the client SDK handles that processing for you and executes a set of REST calls to your mobile app backend.

Programmatically change database for heroku dataclips

We just upgraded our Heroku Postgres database using the follower changeover method. We have over 50 dataclips attached to the old database, and now we need to move them over to the new database. However, doing them one by one will take a lot of time.
Is there a programmatic way to update the database a dataclip is attached to, perhaps with the CLI tools?
At least once the old database has been deprovisioned, you can now (as of March 2016) reattach its dataclips to another database:
Go to https://dataclips.heroku.com/clips/recoverable. It will display your old database and a set of 'orphaned' dataclips and you can choose to transfer them to another database (in my case the promoted follower from the changeover).
Note that this only affects the dataclips that you created; it does not affect dataclips that one of your team members created and that you only had access to, so they will have to go through this process as well.
Official devcenter article: https://devcenter.heroku.com/articles/dataclips#dataclip-recovery
Thanks to Heroku CSRF measures, programmatically updating data clips is much more difficult than you might expect. You'll need to suck it up and start clicking buttons by hand, or beg their support team to do it for you, which is just as difficult.
There is no official support for programmatically moving the dataclips. That being said, you can script it out against their HTTP API.
The base URL is https://dataclips.heroku.com/api/v1/. There are three relevant endpoints:
clips:                 /clips
resources (databases): /heroku_resources
move clip:             /clips/:slug/move
Find the slug of the clip you want to move, find the resource id of the new database, and make a post to the move clip endpoint:
POST /api/v1/clips/fjhwieufysdufnjqqueyuiewsr/move
Content-Type: application/json

{"heroku_resource_id":"resource123456789@heroku.com"}
I had over 300 dataclips to move. I used the following technique to update them all (essentially reverse engineering the dataclips API).
Open Chrome with Web Developer tools, Network tab.
Log into Heroku Dataclips
Observe the network call which returns all the dataclips, in JSON (https://dataclips.heroku.com/api/v1/clips). Take this response and extract out all dataclip slugs.
Update the database for one dataclip. Observe the network call which does this (https://dataclips.heroku.com/api/v1/clips/:slug/move). Right click, Copy as cURL. This is the easiest way to get all the correct parameters, since the API uses cookies for authentication.
Write a script that loops through each dataclip slug, and shells out to curl. In Ruby, this looks like:
slugs = <paste ids here>.split("\n")

slugs.each do |slug|
  command = %Q(curl -v 'https://dataclips.heroku.com/api/v1/clips/#{slug}/move' -H 'Cookie: ...' --data '{"heroku_resource_id":"resource1234567@heroku.com"}')
  puts command
  system(command)
end
You can contact Heroku support, and they will bulk transfer the dataclips to your new database for you.
Batch working on dataclips
I've finally found a solution for working on my dataclips as a batch, using the JavaScript console and some scraping techniques. I needed it to retrieve every dataclip, but I guess it can be adapted to update them as well:
// Go to the dataclip listing (https://data.heroku.com/dataclips).
// Then execute this script in your console.
// Be careful, this will focus a new window every 4 seconds, preventing
// you from working 4 seconds times the number of dataclips you have.

// Retrieve urls and titles
let dataclips = Array
  .from(document.querySelectorAll('.rt-td:first-child a'))
  .map(el => ({ url: el.href, title: el.innerText }))

/**
 * Allows waiting for a given timeout before execution.
 * @param {number} ms
 */
const timeout = function (ms) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve()
    }, ms)
  })
}

/**
 * Here are all the changes you want to apply to every single
 * dataclip.
 * @param {object} window
 */
const applyChanges = function (window) {
}

// With a fast connection, 4 seconds is OK. Dial it up if you
// have errors.
const expectedLoadTime = 4000 // ms

// This is the main loop, windows are opened one by one to ensure focus and a
// correct loading time.
for (const dataclip of dataclips) {
  // This opens another window from the script, having access to its DOM.
  // See https://github.com/buonomo/kazoo for a funnier example usage!
  // And don't be shy to star and share :D
  const externWindow = window.open(dataclip.url)
  // A hack to wait for loading, this could be improved for sure.
  await timeout(expectedLoadTime)
  applyChanges(externWindow)
  externWindow.close()
}
You'd still have to implement applyChanges yourself, which I concede is a bit tedious, and I don't have time to do it now (if someone does, please share!). But at least it can be applied to all of your dataclips in a single run.
For an example usage of this script, you can take a look at the gist I made to scrape every dataclip and its related errors.
