I am implementing something like Facebook reactions via getstream.io.
Posting and removing activities ("reactions") works fine.
Basics:
While implementing the socket feature (faye) of getstream to reflect feed changes in realtime, I saw that the format of a socket message for new activities differs from the one for deleted activities.
Example after removing ONE reaction and adding ONE:
{
"deleted": [
"d5b1aee0-5a1a-11e6-8080-80015eb61bf9",
"49864f80-5a19-11e6-8080-80015eb61bf9",
"47fe7700-5a19-11e6-8080-80015eb61bf9",
"4759ab80-5a19-11e6-8080-80015eb61bf9",
"437ce680-5a19-11e6-8080-80015eb61bf9"
],
"new": [
{
"actor": "user:55d4ab8a11234359b18f06f6:Manuel Reil",
"verb": "support",
"object": "control:56bf2fb884e5c0756e910755",
"target": null,
"time": "2016-08-04T11:48:23.168000",
"foreign_id": "55d4ab8a11234359b18f06f6:support:56bf2fb884e5c0756e910755",
"id": "58d9c000-5a39-11e6-8080-80007c3c41d8",
"to": [],
"origin": "control:56bf2fb884e5c0756e910755"
}
],
"published_at": "2016-08-04T11:48:23.546708+00:00"
}
I subscribe to the aggregated feed that follows a flat feed.
I add and remove activities via the flat feed.
Subscriptions to the flat and the aggregated feed both return the same message when adding and removing activities.
Challenges I am facing:
When I remove ONE activity (via foreign_id), why do 5 IDs appear in the deleted array?
I need the foreign_id to reflect changes in the app while digesting the socket message from getstream.io. This works fine for new activities, as the full object is sent (see example above). However, for deleted activities the foreign_ids are missing, as just an array of IDs is sent.
Potential approaches:
Can I somehow configure my getstream faye subscription or config to (also) return foreign_ids for the deleted items?
I could try to fetch those IDs with an additional request based on the socket message, but this seems almost ridiculous.
Thank you very much.
Removing activities via foreign_id deletes all the activities with the given foreign_id present in the feed. This is one of the main upsides of using the foreign_id field: it allows for cascading deletes to a group of activities (eg. Post and Likes is a typical use-case where you want to delete one Post and all Likes related to it).
Another advantage of using foreign_id is that you don't have to keep track of the ID generated by Stream.
You should be able to solve your first problem by picking a value for the foreign_id field that is unique (eg. the object ID from your database), this way you can still delete easily and avoid the cascaded delete behavior.
Regarding your second problem, if you are updating your UI based on the real-time updates, it also means you already read from the same feed, and that you have the list of activities with their IDs and foreign_ids. Selecting activities by activity_id should be just a matter of creating some sort of in-memory map (eg. add a data-activity_id attribute to your DOM).
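That lookup can be sketched in plain JavaScript (the function names here are illustrative, not part of the Stream API): after reading the feed once, keep a map from Stream's activity id to your own foreign_id, then resolve the IDs in the deleted array against it.

```javascript
// Build a lookup from Stream activity id -> foreign_id after reading the feed.
// The activity objects have the same shape as the "new" entries above.
function buildActivityIndex(activities) {
  const index = new Map();
  for (const activity of activities) {
    index.set(activity.id, activity.foreign_id);
  }
  return index;
}

// Resolve the bare ids from the "deleted" array against that lookup.
// Ids that were never read come back with a null foreign_id.
function resolveDeleted(deletedIds, index) {
  return deletedIds.map(function (id) {
    return { id: id, foreign_id: index.get(id) || null };
  });
}
```

With this in place, the realtime handler only needs `resolveDeleted(message.deleted, index)` to know which of your own objects to remove from the UI.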
Related
I use Laravel and Vue to manage some data from the DB, and I return JSON from a Laravel controller to Vue.js. I just want to hide the response data from the network tab, or maybe mask it. I haven't done this before. I mean, when I open the network tab I see a request get-users?page=1, and if I double-click and open this URL http://127.0.0.1:8000/admin/users/get-users?page=1 it shows me all the data like this
{
"data": [
{
"id": 1,
"name": "Admin",
"email": "admin#admin.com",
"email_verified_at": null,
"last_online_at": "2022-12-02 10:27:20",
Is there any way to mask this data to something like this?
"data": [
{
success: true,
response: null //or true
}
This is how I return the users data:
return new UserResource(User::paginate($paginate));
I want to hide the data from this tab:
http://127.0.0.1:8000/admin/users/get-users?page=1
Requests will be shown.
This cannot be stopped: the application is making requests, and these will be logged to the network tab by the browser. If there are security concerns, you should be handling this a different way. Do not send data to the client that they should not be allowed to access in the first place.
To try and ensure security, run over HTTPS on the off chance the data gets intercepted; that way it will not be usable data. Most data will be provided by the user, meaning it should not need to be hidden within the network tab.
Worst case scenario, someone physically sits at their computer and reads what is in the network tab, but this is a scenario that can't be accounted for when developing applications. You could base64-encode the data being sent back and forth so it is less readable to anyone who sees the network tab. Here are some resources related to the question.
Base64
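As a sketch of that base64 suggestion in Node.js (purely illustrative; this is obfuscation, not security, since anyone can decode base64):

```javascript
// Encode a response payload to base64 before sending it to the client.
// This only obscures the payload: base64 is trivially reversible.
function encodePayload(data) {
  return Buffer.from(JSON.stringify(data)).toString("base64");
}

// The client (or any reader of the network tab) can decode it again.
function decodePayload(encoded) {
  return JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
}
```

On the Laravel side, the rough equivalent would be returning base64_encode(json_encode($data)) instead of the raw resource.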
I'd like to know what best practices exist around subscribing to changes and publishing them to the user. This is a pretty broad and vaguely worded question. Therefore, allow me to elaborate on this using an example.
Imagine the following (simplified) chat-like application:
The user opens the app and sees the home screen.
On this home screen a list of chat-groups is fetched and displayed.
Each chat-group has a list of users (members).
The user can view this list of members.
Each user/member has at least a first name available.
The user can change its name in the settings.
And now the important part: When this name is changed, every user that is viewing the list of members, should see the name change in real-time.
My question concerns the very last point.
Let's create some very naive pseudo-code to simulate such a thing.
The client should at least subscribe to something. So we could write something like this:
subscribeToEvent("userChanged")
The backend should on its part, publish to this event with the right data. So something like this:
publishDataForEvent("userChanged", { userId: "9", name: "newname" } )
Of course there is a problem with this code. The subscribed user now gets all events for every user. Instead it should only receive events for users it is interested in (namely the list of members it is currently viewing).
Now that is the issue I want to know more about. I could think of a few solutions:
Method 1
The client subscribes to the event, and sends with it, the id of the group he is currently viewing. Like so for example:
subscribeToEvent("userChanged", { groupId: "abc" })
Consequently, on the backend, when a user changes its name, the following should happen:
Fetch all group ids of the user
Send out the event using those group ids
Something like this:
publishDataForEvent("userChanged", { userId: "9", name: "newname" }, { groupIds: ["abc", "def" })
Since the user is subscribed to a group with id "abc" and the backend publishes to several groups, including "abc", the user will receive the event.
A drawback of this method is that the backend should always fetch all group ids of the user that is being changed.
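Method 1 can be sketched with a tiny in-memory broker (the function names follow the pseudo-code above; a real system would put something like Redis pub/sub or a WebSocket server behind the same interface):

```javascript
// Minimal in-memory broker for Method 1: subscriptions are keyed by
// event name + group id, and a publish fans out to every listed group.
const subscriptions = new Map(); // key: "event:groupId" -> array of callbacks

function subscribeToEvent(event, opts, callback) {
  const key = event + ":" + opts.groupId;
  if (!subscriptions.has(key)) subscriptions.set(key, []);
  subscriptions.get(key).push(callback);
}

function publishDataForEvent(event, data, opts) {
  for (const groupId of opts.groupIds) {
    const callbacks = subscriptions.get(event + ":" + groupId) || [];
    for (const cb of callbacks) cb(data);
  }
}
```

A subscriber of group "abc" receives a publish to ["abc", "def"], while a subscriber of group "xyz" does not.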
Method 2
Same as method 1. But instead of using groupIds, we will use userIds.
subscribeToEvent("userChanged", { myUserId: "1" })
Consequently, on the backend, when a user changes its name, the following should happen:
Fetch all the user ids that relate to the user (so e.g. friendIds based on the users he shares a group with)
Send out the event using those friendIds
Something like this:
publishDataForEvent("userChanged", { userId: "xyz", name: "newname" }, { friendIds: ["1", "2" })
An advantage of this is that the subscription can be somewhat more easily reused. Ergo the user does not need to start a separate subscription for each group he opens, since he is using his own userId instead of the groupId.
Drawback of this method is that it (like with method 1 but probably even worse) potentially requires a lot of ids to publish the event to.
Method 3
This one is just a little different.
In this method the client subscribes on multiple ids.
An example:
On the client side the application gathers all users that are relevant to the current user. For example, that can be done by gathering all the user ids of the currently viewed group.
subscribeToEvent("userChanged", { friendIds: ["9", "10"] })
At the backend the publish method can be fairly simple like so:
publishDataForEvent("userChanged", { userId: "9", name: "newname" }, { userId: "9" } )
Since the client is subscribed to user with userId "9", amongst several users, the client will receive this event.
Advantage of this method is that the backend publish method can be held fairly simple.
Drawback of this is that the client needs quite some logic to subscribe to the right users.
I hope that the examples made the question more clear. I have the feeling I am missing something here. Like, major chat-app companies, can't be doing it one of these ways right? I'd love to hear your opinion about this.
On a side note, I am using graphql as a backend. But I think this question is general enough to not let that play a role.
The user can change its name in the settings.
And now the important part: When this name is changed, every user that is viewing the list of members, should see the name change in real-time.
I assume the user can change his name via a FORM. The contents of that form will be sent with an HTTP request to a backend script that will make the change in a DB, like
update <table> set field=? where userid=?
Preferred
This would be the point where that backend script would connect to your web socket server and send a message like.
{ opcode: 'broadcast', task: 'namechange', newname: 'otto', userid: '4711' }
The server will then broadcast to all connected clients
{ task: 'namechange', newname: 'otto', userid: '4711' }
All clients that have a relationship to userid='4711' can now take action.
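On the client, handling that broadcast is mostly a lookup against the locally cached member list. A sketch (the message shape follows the example above):

```javascript
// Apply a namechange broadcast to the locally cached member list.
// Returns true if a member with that userid was found and updated.
function applyNameChange(members, message) {
  if (message.task !== "namechange") return false;
  const member = members.find(m => m.userid === message.userid);
  if (!member) return false; // not in any group this client is viewing
  member.name = message.newname;
  return true;
}
```

Clients viewing a group that does not contain the user simply ignore the message, so the server can keep a single dumb broadcast.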
Alternative 1
If you can't connect your backend script to the web socket server, the client might send { opcode: 'broadcast', task: 'namechange', newname: 'otto', userid: '4711' }
right before the FORM is transmitted to the backend script.
This is shaky, because if anything goes wrong in the backend the false message has already been delivered; or the client might die before the message goes out, and then no one will notice the change.
I connected to Sync Gateway via the API, but I don't know how to filter some data and use it in my Laravel project.
You are not filtering by channel in the Sync Gateway config; the filtering is the outcome of the sync function, but it is more of a passive result of attaching a channel to a document.
I have no idea which version you are using, because your question lacks it; however, the configuration is pretty straightforward.
You basically have 2 options for attaching a channel to a document, the second overriding the first:
1. Don't have any sync function in the config file and just rely on a "channels" property; that property will make the document sync to the appropriate channels.
For example:
{ "name": "Duke", "lastName": "Nukem", "channels": ["DukeNukem","Duke"] }
2. Have a sync function in the config file:
For the document:
{ "name": "Duke", "lastName": "Nukem" }
you might have a sync function that does the same:
function(doc, oldDoc){
if (doc.name && doc.lastName) {
channel(doc.name);
channel(doc.name + doc.lastName);
}
}
Please note that you will have to grant the user permission to be able to see a channel.
On the client side you would need that user with the permission, and if you are not filtering for a channel you will get the docs whenever you sync.
Please read more here.
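One way to grant that permission is from the sync function itself via access(). A sketch: channel() and access() are provided by the Sync Gateway runtime, and the doc.owner property is a hypothetical field used only for illustration.

```javascript
// Sketch of a sync function that routes the document to channels and also
// grants a user access to those channels. channel() and access() are
// supplied by the Sync Gateway runtime; doc.owner is a hypothetical field.
const syncFunction = function (doc, oldDoc) {
  if (doc.name && doc.lastName) {
    channel(doc.name);
    channel(doc.name + doc.lastName);
    if (doc.owner) {
      // Let this user pull documents from both channels.
      access(doc.owner, [doc.name, doc.name + doc.lastName]);
    }
  }
};
```

Without such a grant (or an equivalent one through the admin API), a user who filters on a channel they cannot see will simply receive nothing.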
Here is a Swift example on the client side on how to use the "channels" to route data:
let manager = CBLManager()
let database = try! manager.databaseNamed(dbName)
let pull = database.createPullReplication(url)
// We need to sync channels
pull.channels = ["somechannels"]
pull.start()
A concrete example: in a store-management application, every document belonging to a Store would be saved with channels containing the storeID.
On the client side, when syncing, we put the storeID inside channels so that the sync gets only the documents belonging to that Store. (We are using the default sync function.)
Note that there are security concerns you need to take into consideration; read more here.
I am hitting an API to return stats for some websites; I analyse the returned values and add some of the sites to an array.
I then construct a Slack message and add the array of sites to the fields section like this:
"attachments": [
{
"fallback": "",
"color": "#E50000",
"author_name": "title",
"title": "metrics recorded",
"title_link": "https://mor47992.live.dynatrace.com/#dashboard;id=cc832197-3b50-489e-b2cc-afda34ab6018;gtf=l_7_DAYS",
"text": "more title info",
"fields": sites,
"ts": Date.now() / 1000 | 0
}
]
This is all in a Lambda which is triggered every 5 minutes; the first message comes through fine.
However, subsequent messages just append to the fields section of the original message, so it looks like I have delivered duplicate content. Is there a way to force each hit to the incoming webhook to post as a brand new message to Slack?
Here is an example of a follow-up message; notice the duplicate content.
No. It's a "feature" of Slack that it will automatically combine multiple messages from the same user/bot without restating the user name if they are sent within a short time frame.
To separate the attachments in your case, I would suggest adding an introduction text: either via the text property of the message (on the same level as the attachments property), or by adding a pretext to each attachment.
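A sketch of that workaround for the Lambda in the question (the runLabel parameter is a hypothetical label used only to make each post distinct; the attachment fields follow the question's payload):

```javascript
// Build a Slack payload whose attachment carries a pretext and whose
// message carries top-level text, so consecutive webhook posts read as
// separate messages instead of merged attachments.
function buildSlackPayload(sites, runLabel) {
  return {
    text: "Site metrics run: " + runLabel, // top-level intro text
    attachments: [
      {
        fallback: "",
        color: "#E50000",
        pretext: "Results for " + runLabel,
        title: "metrics recorded",
        fields: sites,
        ts: Math.floor(Date.now() / 1000)
      }
    ]
  };
}
```

Each 5-minute invocation would pass a fresh runLabel (e.g. a timestamp), so Slack renders each post with its own introduction.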
I am successfully retrieving user emails from our Exchange Online server, via REST requests and the ADAL library. We have been retrieving and processing calendar-event emails, and their associated calendar events, which are generated by Outlook, GMail/Google-Calendar, iPad, iPhone and Android devices.
We have been looking in the ClassName property for "meeting.request" or "meeting.cancelled", but those values were removed a week ago and have not returned. We have now been looking for a non-Null MeetingMessageType property (MeetingRequest or MeetingCancelled), but as of today, those properties have also been removed. This is incredibly valuable interop data but I don't know where to look next.
How can I associate a retrieved JSON Message object from a user's mailbox or a shared mailbox with an (Exchange...) associated calendar event? We can process meeting creations, invitations, acceptances etc. with message items which we then purge; whereas querying the calendars for new and updated events is much, much more intensive, since we certainly can't purge calendar events off the calendar as we process them!
Can I query the calendar for associated message Ids? I can't imagine this would be possible to do for every message.
Thanks!
Edit: @Venkat Thanks. Mail items are infinitely more process-able than emergent calendar-event standards. As an Exchange dev, I have to ask: do you really need an example of how I can process a mail-bound event better as a mail item rather than as a calendar event item? OK, that's cool; here is one:
One thing we are doing is cc/bcc-ing mail/mtg-requests to specific mailboxes for processing (or using client and server rules to accomplish the same thing). We can then poll individual mailboxes, shared mailboxes, and/or collections of mailboxes to auto-respond or not... and to move to specific calendars or not, and to redirect to specific users or not, and to change header information during routing for further category classification or not, and to even replace recipients/attendees or not etc. etc. To do the same thing with REST calendar requests, we'd lose all server rules automation, all client rules automation, procedural auto-respond, all headers manipulation (data-insertion/extraction), etc. We're just trying to push events to a cloud app, for certain collections of users, using shared mailboxes which redirect to specific daemon accounts, which hold calendars for specific subsets of our users/clients.
Like everyone else, we are trying to integrate with cloud apps. So we need procedural parsing, data-manipulation, and pushing of both mail and calendar items. So, for one thing, we have the massive advantages of server mail-processing rules, client/user mail rules, mail header modifications (easy item data modification), mail auto-respond control, and blind recipients. Calendar events don't get any of those things. For a second thing, we have a much more robust mail folders taxonomy than calendar(s) taxonomy (which is almost non-existent). For a third thing, Calendar event mail items are user-specific and have less persistent value than shared calendar events. Finally, if we're processing mail items any way-- why not at least have an eventId for events? Why take out ALL interop information? Having an eventId completely eliminates the need for a query against a calendar endpoint returning multiple items, and adds no addition queries against a mail endpoint.
Google includes an attached ics. Even if you eliminate the event item attachment from the API mail item, I don't see why you have to remove the eventId. Processing calendar events by mail is nothing new, but we have to have a data-binding between the two objects, to do it. That is all.
My Exchange Server still knows when a mail item is a calendar event. It just won't tell ~me~, any more, if I ask it over REST. So, as a brutish work-around I can set up a mail rule to add a category of "api_calendarEvent" for all incoming messages that are of type "Meeting Request". Then, after making a REST call for mail items, I can parse categories to manually repopulate a class property. But why remove the attachment, classname, MeetingMessageType, and EventId altogether from the mail item? Even if I made a server rule to re-categorize certain mail items in certain mailboxes as calendar events, and was able to know when to poll a calendar to get event details-- would I always know what calendar to poll, to find that event? All we'd need to avoid blind polling across multiple calendars, is for you to retain the EventId and/or ClassName. Then we'd also have massive automation of calendar processing again, which has currently been removed from the API.
Thanks!
Thanks for the detailed response to my comment! Your scenario is something we wish to support. As part of schema clean up, we removed event ID and meeting message type from message, as it was being included for every message. For calendar invites and responses, we plan to add back 2 properties:
1. A navigation link to the related event, so you can click through to the event and take actions on it, if you have calendar permissions.
2. A calendar response type, e.g. Meeting Accepted, Meeting Declined etc., so you know what type of response you have received.
We are currently working on the design and we don't have the exact timeline to share. But, we will update our documentation as soon as we have this API available.
[UPDATE] We now return calendar event invites and responses as EventMessage which is a subclass of Message. This entity includes a property called MeetingMessageType and a navigation link to the corresponding Event on the user's calendar. See below for an example:
{
@odata.context: "https://outlook.office365.com/api/v1.0/$metadata#Users('<snipped>')/Messages/$entity",
@odata.type: "#Microsoft.OutlookServices.EventMessage",
@odata.id: "https://outlook.office365.com/api/v1.0/Users('<snipped>')/Messages('<snipped>')",
@odata.etag: "<snipped>",
Id: "<snipped>",
ChangeKey: "<snipped>",
Categories: [ ],
DateTimeCreated: "2015-04-08T14:37:55Z",
DateTimeLastModified: "2015-04-08T14:37:55Z",
Subject: "<snipped>",
BodyPreview: "",
Body: {
ContentType: "HTML",
Content: "<snipped>"
},
Importance: "Normal",
HasAttachments: false,
ParentFolderId: "<snipped>",
From: {
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
},
Sender: {
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
},
ToRecipients: [{
EmailAddress: {
Address: "<snipped>",
Name: "<snipped>"
}
}],
CcRecipients: [ ],
BccRecipients: [ ],
ReplyTo: [ ],
ConversationId: "<snipped>",
DateTimeReceived: "2015-04-08T14:37:55Z",
DateTimeSent: "2015-04-08T14:37:48Z",
IsDeliveryReceiptRequested: null,
IsReadReceiptRequested: false,
IsDraft: false,
IsRead: false,
WebLink: "<snipped>",
MeetingMessageType: "MeetingRequest",
Event@odata.navigationLink: "https://outlook.office365.com/api/v1.0/Users('<snipped>')/Events('<snipped>')"
}
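Given that shape, a client can tell invites apart from plain mail by checking the OData type annotation and following the navigation link. A sketch over a parsed message object (property names follow the OData conventions used in the sample above):

```javascript
// Decide whether a message returned by the REST API is a calendar event
// message; if so, extract its MeetingMessageType and the link to the
// corresponding Event on the user's calendar.
function extractEventInfo(message) {
  if (message["@odata.type"] !== "#Microsoft.OutlookServices.EventMessage") {
    return null; // a plain mail item
  }
  return {
    meetingMessageType: message.MeetingMessageType,
    eventLink: message["Event@odata.navigationLink"] || null
  };
}
```

A mailbox-polling daemon could run this over each fetched message and only hit the calendar endpoint for the few messages that actually carry an event link.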
Please let me know if our proposed changes meet your requirements, if you have any questions or need more info.
Thanks,
Venkat