I would like to be able to "listen" live to transactions on Solana for specific NFTs. Is this possible?
I have managed to do this for Ethereum using web3, a contract address, and ABIs, which let me listen to any NFT transaction within an NFT contract (e.g. any CryptoPunk transaction). Is a similar thing possible for Solana?
I know solscan.io provides most of the data I would need (so I could scrape it from there), but is there a way to do it directly from my own computer, like with Ethereum?
Absolutely! You can listen to different parts of the chain using the subscription websocket. For example, if you want to listen to all transactions by a program, you can use https://docs.solana.com/developing/clients/jsonrpc-api#programsubscribe
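If you're working from JavaScript/TypeScript, @solana/web3.js wraps these subscriptions for you. Here's a minimal sketch that streams the logs of every transaction touching a given program; the program ID is a placeholder you'd replace with the program that owns the NFTs you care about (for many Solana NFTs that's the Metaplex Token Metadata program):

```typescript
import { Connection, PublicKey } from "@solana/web3.js";

// Connect to a public RPC endpoint; the websocket subscription is
// opened automatically for the on* methods.
const connection = new Connection("https://api.mainnet-beta.solana.com", "confirmed");

// Placeholder: put the program you care about here (for many NFTs this
// is the Metaplex Token Metadata program).
const programId = new PublicKey("REPLACE_WITH_PROGRAM_ID");

// Fires for every transaction whose logs mention the program.
const subscriptionId = connection.onLogs(programId, (logs, ctx) => {
  console.log("signature:", logs.signature);
  console.log("logs:", logs.logs);
});
```

There are sibling methods like onProgramAccountChange and onAccountChange if you'd rather watch account state than transaction logs.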
As part of my project, I'd like to use microservices. The application is a store website where the admin can add products and the user can order and buy them.
I envision implementing four services: admin service, user service, product service, and order service.
I had trouble handling data between multiple services, but I solved it by duplicating some of the necessary data using message brokers.
This solution works between the product, user, and order services because each of them needs only some of the data, not all of it.
Now, my question is about handling the admin service, because in this service I need access to all of the data; for example, the admin should have a list of users and the ability to add new products or update them.
How can I handle data between these services and the admin service?
Should I duplicate all the data inside the admin service?
Should I use a REST API?
No, that's wrong; it seems you want to run away from the underlying issue. In general, duplication is an anti-pattern, especially in the case you describe.
The way you are thinking about the admin service is wrong:
"because in this service I need to access all of the data"
I don't think you need such a service. Access to data on a per-user basis should be handled by an identity server (OIDC/OAuth), which is a separate service that controls access to the endpoints.
For example, the product service provides: (1) return the product list, (2) return an individual product's data, (3) create a product. The first two can be accessed by both users and admins, but the third must be accessible only to admins. One of the identity server's duties is to identify the user whenever the user interacts with the services (login).
ADMIN Scenario
The user client requests the create-product endpoint (on a service, e.g. the product service).
The client app (the front-end app) is configured against the identity server; it realizes the required identity tokens are missing and redirects to the identity server's login.
NOTE: the identity server also identifies the client app itself, which I've skipped here.
The user logs in and gets the required token, based on their claims, roles, etc.
The user client requests the create-product endpoint again, with the tokens included in the request header.
The endpoint (product service) receives the request and checks the header (the services are also configured against the identity server and user claims),
then reads the user's claims.
Create-product requires the admin role: if it's present in the claims, the request goes through; otherwise access is denied (a minimal sketch of this check follows below).
IdentityServer4 is one implementation; there are several others, and you can also implement this yourself using OAuth and OIDC protocol libraries.
So the admin just makes requests to the relevant service directly; there is no need to funnel data through a separate service for this.
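To make the check concrete, here is a hypothetical sketch using Express and the jsonwebtoken package. The route, signing-key handling, and the "role" claim name are all assumptions; in a real setup, token validation is configured against your identity server (e.g. via its JWKS endpoint):

```typescript
import express from "express";
import jwt, { JwtPayload } from "jsonwebtoken";

const app = express();
app.use(express.json());

// Hypothetical create-product endpoint guarded by a role claim.
app.post("/products", (req, res) => {
  const token = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    // In production, verification is configured against the identity
    // server's signing keys rather than a shared secret.
    const claims = jwt.verify(token, process.env.JWT_SIGNING_KEY!) as JwtPayload;
    if (claims.role !== "admin") {
      return res.status(403).json({ error: "admin role required" });
    }
    // ... create the product here ...
    return res.status(201).json({ ok: true });
  } catch {
    return res.status(401).json({ error: "invalid or missing token" });
  }
});

app.listen(3000);
```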
Communication between services:
The most challenging part of microservices is wiring them up, and the wiring is a direct consequence of your design (I recommend a deep study of Domain-Driven Design).
Asynchronous communication:
To avoid coupling between services, mostly use asynchronous communication, in which you pass events through brokers (e.g. RabbitMQ, Kafka, Redis, etc.). In this style, the source service that sends the event does not care about the response and does not wait for it; it is simply always ready to listen for any resulting event. For example:
the inventory service creates an item,
123|shoe-x22|22units
and fires an event with the data 123|shoe-x22 (maybe duplicated data, maybe just the ID) to the product service so it can create its own record, but it does not wait for a response from the product service about whether the creation succeeded.
As you can see, this scenario is unreliable in the face of faults and you need to handle that, so you should study the CAP theorem, sagas, and circuit breakers. A sketch of the publishing side follows below.
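A minimal sketch of that fire-and-forget publish using RabbitMQ's amqplib client; the exchange name, routing key, and event shape are made up for illustration:

```typescript
import amqp from "amqplib";

// Fire-and-forget publish: the inventory service announces the new item
// and moves on without waiting for the product service to confirm.
async function publishItemCreated(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertExchange("inventory-events", "topic", { durable: true });

  const event = { id: 123, name: "shoe-x22", units: 22 };
  channel.publish("inventory-events", "item.created", Buffer.from(JSON.stringify(event)));

  await channel.close();
  await conn.close();
}
```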
Synchronous communication:
In this case, the calling service insists on having the response back immediately, which pushes the services toward tighter coupling. If you need performance you can use gRPC communication; otherwise a simple API call to the relevant service will do. In the gRPC case, I recommend libraries like MassTransit, which can also be used to implement gRPC-style communication with minimal coupling.
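For contrast, the simple synchronous case is just an awaited request to the other service (the URL here is a made-up example):

```typescript
// The calling service blocks (awaits) on the product service's answer.
async function getProduct(productId: number): Promise<unknown> {
  const res = await fetch(`http://product-service/products/${productId}`);
  if (!res.ok) {
    // The caller must handle the other service being slow or down --
    // this is where circuit breakers earn their keep.
    throw new Error(`product service responded ${res.status}`);
  }
  return res.json();
}
```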
Some requests need data from multiple services
If you are in such a situation, you have two options.
Most microservice architectures use an API gateway (e.g. nginx, Ocelot, etc.), which provides reverse proxying, load balancing, SSL termination, and so on. One of its abilities is to merge multiple responses to a single request, but it only merges them; it does not change the data structure of the response.
If you need to return a specific response data structure, you can create an aggregator service that itself calls the other services, gathers the data, wraps it in the desired format, and returns it (see the sketch below).
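A sketch of such an aggregator endpoint; the service URLs and response fields are illustrative assumptions:

```typescript
import express from "express";

const app = express();

// Hypothetical aggregator endpoint: fans out to two services, then
// reshapes the merged result -- the part a gateway's plain response
// merging won't do for you.
app.get("/orders/:id/details", async (req, res) => {
  const [order, user] = await Promise.all([
    fetch(`http://order-service/orders/${req.params.id}`).then(r => r.json()),
    fetch(`http://user-service/users/by-order/${req.params.id}`).then(r => r.json()),
  ]);
  res.json({ orderId: order.id, items: order.items, customer: user.name });
});

app.listen(8080);
```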
So in the end, Domain-Driven Design is still the key. I think I've talked too much; I hope this helps you out.
Soon I'll start a project based on a microservice architecture, and one of the components I need to develop is a worker service (or daemon).
I have some conceptual questions about this.
I need to create a worker service that sends emails and SMS. This worker service needs data in order to send those emails. I also need to create a microservice that allows users to create the lists of emails to be sent by this worker service. But both of them need to consume data from the same database.
Should my worker service consume a microservice resource to get the data, or is it OK for the worker service to have a connection to the same database as my microservice?
Or is it best for my worker service to also expose the API endpoints that let users create new email lists, add or modify configuration, and all the other functionality I need to implement? This sounds like a good idea, but I'd end up with a component that has two responsibilities, so I have some doubts about that.
Thanks in advance.
Two microservices sharing a connection to the same database is usually a bad idea, because each service should be the owner of its own data model and no one else should access it directly. If a service needs data from the domain of another service, it should either get it by calling the owner via an API, or replicate the model in a read-only way in its own database and update it using events, for example.
However, I think that for your current use case the best option is to provide the worker with all the information it needs to send an email (address, subject, body, attached files...), so that the only responsibility of the worker is to send emails, not to fetch the information.
It could also provide the functionality to send emails in batches. In the end, the service will have only one responsibility, "to send emails", but it can provide different ways to do it (single emails, batches, with attached files, etc.).
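One way to picture this: the queued message carries everything the worker needs, so the worker never reads another service's database. A hypothetical job shape:

```typescript
// Everything the worker needs travels in the queued message.
interface EmailJob {
  to: string[];
  subject: string;
  body: string;
  attachments?: { filename: string; contentBase64: string }[];
}

// The worker's single responsibility: consume a job and send it.
async function handleEmailJob(job: EmailJob): Promise<void> {
  for (const recipient of job.to) {
    await sendEmail(recipient, job.subject, job.body, job.attachments);
  }
}

// Stand-in for your SMTP/provider client of choice.
declare function sendEmail(
  to: string,
  subject: string,
  body: string,
  attachments?: { filename: string; contentBase64: string }[]
): Promise<void>;
```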
I'm currently working on a project that is a sort of an extension to an existing project.
Existing Project:
A product that is already there and has a pub/sub system of its own. It uses GraphQL.
Let's take an example.
There's a group (GROUP_1) and some users are a part of that group.
Let's say user A calls a mutation; the other users' sessions subscribed to that mutation get an update about it (that A made these changes). The changes are reflected in their frontends accordingly.
What do we need?
Instead of user A making the change, we want the update to happen when a third-party service notifies us to make the change.
The thing is that the third-party service uses the MQTT protocol and will give us a topic (for GROUP_1) to which it will publish messages, and we need to subscribe to it. Upon receiving a message, if it satisfies our condition, we will call the mutation on behalf of the user the message specifies.
Problem
So we need a client that listens to that topic forever.
And topics can be added over time: new groups can be created in the future, so we will have different topics for different groups (GROUP_1, GROUP_2, etc.).
Research that I did
Have a client that listens to topics permanently, and to which we can add new topics as we create new groups.
Then let this client call the GraphQL API, eventually reflecting the changes in the sessions of the users in that group.
I think that's not quite the way MQTT is meant to be used, and we should instead make changes in the existing project, but that is not in our hands as of now.
Constraint
We can't change the existing workflow of GraphQL mutation and subscription that the existing project has.
Questions
Is there a better approach to do this with the given constraint?
If not, how should we go about creating a client that listens permanently? Should we have a server that stays up for this, or is there a better way to do it?
Maybe I'm not understanding your post, but the solution seems pretty simple: create a new client that subscribes to whatever the third-party service publishes, and then publishes all the info that was passed in by the third party to the topic that the existing clients subscribe to. The beauty of a pub/sub architecture is that adding functionality to an existing system is done by just adding a new set of clients: subscribers, publishers, or both. The existing system doesn't have to change at all if you can reuse the existing topics and data formats. So you create a "proxy" that takes in the new info and publishes it out in the old format.
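Concretely, the proxy could look something like this sketch: an MQTT client (using the mqtt.js package) that subscribes to the third party's topics and replays qualifying messages into the existing GraphQL mutation. The broker URL, topic scheme, mutation, and helper functions are all assumptions:

```typescript
import mqtt from "mqtt";

// Subscribe to the third party's topics; replay qualifying messages
// into the existing GraphQL mutation.
const client = mqtt.connect("mqtt://third-party-broker.example.com");

client.on("connect", () => {
  client.subscribe("groups/GROUP_1");
  // New groups just mean more subscribe() calls at runtime:
  // client.subscribe("groups/GROUP_2");
});

client.on("message", async (topic, payload) => {
  const msg = JSON.parse(payload.toString());
  if (!satisfiesCondition(msg)) return;

  // Reuse the project's existing mutation so its GraphQL subscriptions
  // fire exactly as if a user had made the change.
  await fetch("https://your-api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "mutation Update($input: UpdateInput!) { updateGroup(input: $input) { id } }",
      variables: { input: { groupId: topicToGroup(topic), ...msg } },
    }),
  });
});

declare function satisfiesCondition(msg: unknown): boolean;
declare function topicToGroup(topic: string): string;
```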
I'm quite new to Logic Apps. I've been given the task of building an auto-reply function within Logic Apps that integrates with Exchange Online. I have already done this using Outlook, but I need to be able to apply it to multiple mailboxes, or even the entire company, using Exchange. I'm about to get access to the Exchange Admin Center soon, but I don't really know where to start, because there is no simple way to make a connection to Exchange from Logic Apps. After some research, I think I'll need to use a REST API (I also read about using Exchange Web Services) to get the information I need, but my knowledge of this is quite limited. I guess I'll have to use a program like Postman to request information, so that I can start creating custom connectors to Exchange. If anybody has some understanding of this, feel free to reply and help me out! I will be forever grateful!
There are several different approaches you could take to this. If you (or probably they, in your case) want your logic app to do all the work, then you should use the Graph API rather than EWS (EWS is possible, but because it's the older API you'd lose marks on your assignment). Have a look at http://martink.me/articles/using-microsoft-graph-in-logic-apps, which covers the basics of what to do. To get access to mailboxes tenant-wide, you need to be granted application permissions and get a certificate (and store it in Key Vault on Azure, etc.).
You can do this using inbox rules, https://learn.microsoft.com/en-us/graph/api/mailfolder-post-messagerules?view=graph-rest-1.0&tabs=http, and the Exchange server will do all the work of sending the auto-response (it already has loop-detection logic); your logic app then just needs to handle the creation and management of the rules.
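For reference, creating such a rule is a single Graph call. A hedged sketch, assuming you already have an app-only access token (the application permissions and certificate mentioned above); the example rule below merely forwards matching mail, so check the messageRules doc linked above for the exact conditions and actions your auto-response scenario needs:

```typescript
// Creating an inbox rule via Microsoft Graph with a plain HTTP call.
async function createRule(accessToken: string, userId: string): Promise<unknown> {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/users/${userId}/mailFolders/inbox/messageRules`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        displayName: "Auto-handle incoming mail",
        sequence: 1,
        isEnabled: true,
        conditions: { sentToMe: true },
        actions: {
          forwardTo: [{ emailAddress: { address: "shared-mailbox@contoso.com" } }],
          stopProcessingRules: true,
        },
      }),
    }
  );
  if (!res.ok) throw new Error(`Graph returned ${res.status}`);
  return res.json();
}
```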
But I would suggest you clarify with the person who assigned you the task whether they want the logic app to do the response (e.g. using the Graph API) or whether it's okay for the Exchange server to do it for them (which should be more reliable).
You can also create rules via the Exchange Admin Center, and you could probably also throw Power Automate into the mix to do auto-responses, so I'd clarify what they want so you don't waste time building something they don't want.
I would like to ask whether it is possible to use ROS2 with cloud robotics, or whether this has already been done?
I am currently developing a project using cloud robotics, and one of the goals is to use ROS2 to send orders to the robot.
My question is how to send orders to the robot remotely using ROS2 in Kubernetes.
If someone could give me some references, that would be helpful.
If you specifically want to use ROS2 to communicate between the cloud and the robot, I'm afraid it won't help. You could consider using a declarative API to reliably send orders from the cloud to the robot. You'd need to:
create a custom resource definition (CRD) to represent your orders
send orders from the cloud by creating a custom resource (sketched below)
create a controller on the robot, which looks at the orders and executes them
There's more detail in the declarative API tutorial.
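As a rough illustration of the second step, here is how the cloud side might create an order; the CRD group, plural, and spec fields are made-up examples, and the positional arguments match the pre-1.0 @kubernetes/client-node API:

```typescript
import * as k8s from "@kubernetes/client-node";

// Assumes the Order CRD is already installed and a controller on the
// robot watches Order objects.
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const api = kc.makeApiClient(k8s.CustomObjectsApi);

async function sendOrder(): Promise<void> {
  await api.createNamespacedCustomObject(
    "example.com", // CRD group
    "v1",          // CRD version
    "default",     // namespace
    "orders",      // CRD plural
    {
      apiVersion: "example.com/v1",
      kind: "Order",
      metadata: { name: "order-001" },
      spec: { command: "moveTo", target: { x: 1.0, y: 2.5 } },
    }
  );
}
```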
If you want to avoid this indirection and use ROS2, this ROS forum post suggests using a VPN that can forward multicast traffic.