I am using WebSphere MQ 7.1. I want to set up pub/sub, and I need to define a topic
like "DEPARTMENT" with the following structure:
DEPARTMENT
+--- SUBJECT1
+--- SUBJECT2
     +--- Minor1
e.g. I define the first one like this:
define TOPIC(DEPARTMENT) TOPICSTR('SUBJECT1')
but I hit an error when I try to define SUBJECT2:
define TOPIC(DEPARTMENT) TOPICSTR('SUBJECT2')
It says "Object already exists". How do I remedy this? Thanks.
TOPIC object names are unique, so the same topic object cannot be defined twice. Topic objects are used for administration; topic strings are what you publish messages to and subscribe on. Because you are reusing the DEPARTMENT object name to define another topic, you are getting the error.
You can do it this way:
define TOPIC(DEPSUB1) TOPICSTR('DEPARTMENT/SUBJECT1')
define TOPIC(DEPSUB2) TOPICSTR('DEPARTMENT/SUBJECT2')
define TOPIC(DEPSUB3) TOPICSTR('DEPARTMENT/SUBJECT2/Minor1')
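If you also want an administrative object for the root of the tree (for example, to set attributes that the subtopics inherit), you can define one for the DEPARTMENT topic string too; the object name DEPROOT here is just illustrative:
define TOPIC(DEPROOT) TOPICSTR('DEPARTMENT')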
Later, to receive publications, you can subscribe using topic strings such as:
"#" -> Receive all publications
"DEPARTMENT/#" -> Every publication under 'DEPARTMENT' topic
"DEPARTMENT/+/Minor1" -> All publications on 'Minor1' irrespective of SUBJECTs.
Based on my previous question, I'm trying to fetch messages from a particular folder like Deleted Items.
I am following this document to achieve the above scenario: https://learn.microsoft.com/en-us/graph/api/user-list-messages
GET https://graph.microsoft.com/v1.0/me/mailFolders/deleteditems/messages
Using the above query, I'm getting all deleted messages with a lot of information (HTML code) that I don't want.
I want to customize the response by retrieving only particular attributes like subject, importance, sender, sentDateTime, receiver, receivedDateTime.
I tried to query something like below using $select:
GET https://graph.microsoft.com/v1.0/me/mailFolders/deleteditems/messages?$select=subject,importance,sender,sentDateTime,receiver,receivedDateTime
But I'm getting a 400 Bad Request error like below:
{
  "error": {
    "code": "RequestBroker--ParseUri",
    "message": "Could not find a property named 'receiver' on type 'Microsoft.OutlookServices.Message'.",
    "request_id": "54f9adf-7435-5r8c-a3g6-48gx6343ac",
    "date": "2022-05-24T07:35:06"
  }
}
How do I include receiver details along with sender details?
I tried to reproduce the same in my environment and got the same error.
As I already mentioned in the comment, there is no attribute named 'receiver'. To resolve the error, remove receiver from the query and check the response.
If you want to include receiver details along with sender details, you can try including toRecipients, which gives information about the receivers, as an alternative:
GET https://graph.microsoft.com/v1.0/me/mailFolders/deleteditems/messages?$select=subject,importance,sender,sentDateTime,receivedDateTime,toRecipients
UPDATE:
As @Dmitry Streblechenko mentioned, this only works when you are the only receiver of those messages. If there are multiple recipients, take a while to learn about MAPI properties and OutlookSpy, as he suggested.
Firstly, there is no receiver property. Since the message comes from a mailbox that you explicitly connect to, wouldn't the receiver be the mailbox owner? Unless the message was dragged from another mailbox in Outlook.
Note that you can always request any MAPI property in your Graph query. In your particular case, you probably want the PR_RECEIVED_BY_NAME / PR_RECEIVED_BY_EMAIL_ADDRESS / PidTagReceivedRepresentingSmtpAddress MAPI properties. To retrieve the PidTagReceivedRepresentingSmtpAddress property, use
?$expand=singleValueExtendedProperties($filter=id eq 'String 0x5D08')
You can see the available MAPI properties, as well as construct a Graph query that requests them, in OutlookSpy (I am its author): click the IMessage button to see all available MAPI properties of a selected message, or click Message (Graph) | Query Parameters | Expand.
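Putting the pieces together, the $expand clause can be combined with a $select list in a single request; a sketch, reusing the fields from the earlier answer:
GET https://graph.microsoft.com/v1.0/me/mailFolders/deleteditems/messages?$select=subject,importance,sender,sentDateTime,receivedDateTime,toRecipients&$expand=singleValueExtendedProperties($filter=id eq 'String 0x5D08')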
I'm using kafka-streams:2.7.0 in my Spring Boot application. I have to join two KStreams and change the internal topic name.
KStream messageToRetry = messageStream.join(
        keyToRetryStream,
        documentJoiner,
        JoinWindows.of(Duration.ofMinutes(30)),
        StreamJoined.with(Serdes.String(), docSerde, docSerde).withStoreName("test123"));
Internal topics follow the naming convention ${applicationId}-<storeName>-changelog.
So my auto-generated topic name is:
kafka-streams-dev-KSTREAM-JOINTHIS-0000000004-store-changelog
I'm using StreamJoined.withStoreName("test123") to change the store name and I expected something like:
kafka-streams-dev-test123-store-changelog
But I got:
kafka-streams-dev-test123-other-join-store-changelog
Is StreamJoined.withStoreName() the right method to change the store name? Is there documentation that explains when the other-join suffix is added?
The answer is in the Javadoc: https://kafka.apache.org/27/javadoc/org/apache/kafka/streams/kstream/StreamJoined.html#withStoreName-java.lang.String-
The name for the stores will be ${applicationId}-<storeName>-this-join and ${applicationId}-<storeName>-other-join.
So withStoreName() is the right method, but Kafka Streams always appends a suffix to the name you supply; which suffix depends on which side of the join the store backs and on the join type (this-join/other-join, with outer variants for outer joins). That suffix is what shows up in your changelog topic name.
I am trying to get all channels associated with a specific team so that my bot can send proactive messages. Based on the reading I've done, I need to use the FetchChannelList method in the Microsoft.Bot.Connector.Teams namespace, in the TeamsOperationsExtensions class.
If I do this:
var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
ConversationList channels = connector.GetTeamsConnectorClient().Teams.FetchChannelList(activity.GetChannelData<TeamsChannelData>().Team.Id);
channels is null. If I break it down to only connector.GetTeamsConnectorClient(), that is not null, but connector.GetTeamsConnectorClient().Teams.FetchChannelList(activity.GetChannelData<TeamsChannelData>().Team.Id) is.
To break it down further, I tried activity.GetChannelData<TeamsChannelData>(). Only the Tenant property is not null. All the others (Channel, Team, EventType and Notification) are null.
I am using Tunnel Relay, which forwards messages sent to the bot's public endpoint to a private endpoint, and I am using tenant filter authentication in the messages controller. Not sure if that could cause any problems? (When I watch messages coming in through Tunnel Relay, I see there too that Tenant is the only channelData property which is not null.) Here's what I see in Tunnel Relay:
"entities":[{"locale":"en- US","country":"US","platform":"Windows","type":"clientInfo"}],"channelData":{"tenant":{"id":"our_tenant_id"}}}
Also, regarding the teamId expected as a parameter to the FetchChannelList method: how do I find out what that is for a given team, other than via the GetChannelData() method? I tried the PowerShell cmdlet Get-Team (for example: Get-Team -User me@abc.com). It returns a distinct GroupId for each team I am a part of, but I'm assuming groupId != teamId. Is that correct? And where can I find the teamId that FetchChannelList is expecting, other than via GetChannelData?
Thanks in advance for any help!
The problem here was that the message to the bot (the activity) was a direct message, not part of a channel conversation. Apparently, the Channel and Team properties are only populated in a channel conversation.
Also, regarding the team ID, one way to get it outside of code is to click the "..." next to the team and click "Get link to team". You will see something like:
https://teams.microsoft.com/l/team/19%3a813345c7fafe437e8737057505224dc3%40thread.skype/conversations?groupId=Some_GUID&tenantId=Some_GUID
The part after team/ (19%3a813345c7fafe437e871111115934th3%40thread.skype) contains the teamId, but not exactly. If you replace the first % and the two characters immediately following it with : and the second % and the two characters immediately following it with @, that is your teamId. So, from:
19%3a813345c7fafe437e871111115934th3%40thread.skype
the team ID is:
19:813345c7fafe437e871111115934th3@thread.skype
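That substitution is just URL (percent) decoding: %3a decodes to : and %40 decodes to @. So you can also let a URL decoder do the work; a minimal sketch in Java, using the illustrative string above:

import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class TeamIdDecode {
    public static void main(String[] args) {
        String fromLink = "19%3a813345c7fafe437e871111115934th3%40thread.skype";
        // Percent-decoding turns %3a into ':' and %40 into '@'
        String teamId = URLDecoder.decode(fromLink, StandardCharsets.UTF_8);
        System.out.println(teamId); // 19:813345c7fafe437e871111115934th3@thread.skype
    }
}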
I have a problem with the Confluent Elasticsearch connector.
When creating the connector, you need to specify a topic (the Elasticsearch index) and a type (the document type in ES).
{
  "name": "test-connector",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "key.ignore": "false",
    "schema.ignore": "false",
    "connection.url": "http://elastic:9200",
    "type.name": "type1",
    "name": "elasticsearch-sink"
  }
}
I want to publish to the same index (Kafka topic) but to different types. Is that possible?
I've tried creating multiple connectors, but the problem is that every connector consumes the messages, because it's the same topic.
I've also tried creating a connector on the fly with a specific type, publishing, and then removing the connector. But sometimes it is removed too early and not all messages get consumed (they didn't appear in Elastic). Also, when I remove a connector and create another one with a different document type, the new connector consumes some of the old messages.
Does anyone have an idea how to manage this?
Each connector can route messages to one type. You could use a Single Message Transform to route messages to different indices, but that is not what you want.
What I would recommend is to use stream processing to split the messages into different topics. Each topic is then streamed by a different connector to the same index but with a different type, as required.
To do the stream processing you could use something like Kafka Streams, Spark Streaming, etc. There's also KSQL, which would let you do something like this:
CREATE STREAM FOO_TYPE_A AS SELECT * FROM FOO WHERE TYPE='A';
CREATE STREAM FOO_TYPE_B AS SELECT * FROM FOO WHERE TYPE='B';
CREATE STREAM FOO_TYPE_C AS SELECT * FROM FOO WHERE TYPE='C';
You then have three topics (FOO_TYPE_A, FOO_TYPE_B, FOO_TYPE_C) that you create three connectors for, streaming to index FOO but with different types.
Disclaimer: I work for Confluent, the company behind the open-source KSQL project.
I found a solution; unfortunately it is deprecated. If you know something better, please let me know.
From the official docs:
topic.index.map
This option is now deprecated. A future version may remove it completely. Please use single message transforms, such as RegexRouter, to map topic names to index names.
A map from Kafka topic name to the destination Elasticsearch index, represented as a list of topic:index pairs.
Type: list
Default: ""
Importance: low
So I've created a connector like this:
{
  "name": "test-connector-old",
  "config": {
    .....
    "topics": "old",
    "topic.index.map": "old:test",
    ....
  }
}
Now I can push to topic "old" and it will be indexed into the Elasticsearch "test" index.
Then I created more connectors, and by using "topic.index.map": "TOPIC_NAME:test", I could index different types into the same index.
In future versions it will be topic => index. Confluent team, please don't remove topic.index.map, or find a better solution for this case.
Thank you!
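For reference, the RegexRouter transform that the docs recommend can express the same old → test mapping without the deprecated setting. A sketch of the relevant config keys (the transform alias renameTopic is made up):

"transforms": "renameTopic",
"transforms.renameTopic.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.renameTopic.regex": "old",
"transforms.renameTopic.replacement": "test"

The Elasticsearch sink derives the index name from the (rewritten) topic, so records consumed from topic "old" are indexed into "test".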
I'm trying to write a Cascading (v1.2) cascade (http://docs.cascading.org/cascading/1.2/userguide/htmlsingle/#N20844) consisting of two flows:
1) The first flow outputs URLs to a db table (in which they are automatically assigned IDs via an auto-incrementing id value).
This flow also outputs pairs of URLs into a SequenceFile with field names "urlTo", "urlFrom".
2) The second flow reads from both these sources and tries to do a CoGroup on "urlTo" (from the SequenceFile) and "url" (from the db source) to get the db record "id" for each "urlTo".
It then does a CoGroup on "urlFrom" and "url" to get the db record "id" for each "urlFrom".
The two flows work individually, if I call flow.complete() on the first before running the second flow. But if I put the two flows in a Cascade object I get the error
cascading.cascade.CascadeException: no loops allowed in cascade, flow: urlLink*url*url, source: JDBCTap{connectionUrl='jdbc:mysql://localhost:3306/mydb', driverClassName='com.mysql.jdbc.Driver', tableDesc=TableDesc{tableName='urls', columnNames=null, columnDefs=null, primaryKeys=null}}, sink: JDBCTap{connectionUrl='jdbc:mysql://localhost:3306/mydb', driverClassName='com.mysql.jdbc.Driver', tableDesc=TableDesc{tableName='url_link', columnNames=[urlLinkFrom, urlLinkTo], columnDefs=[bigint(20), bigint(20)], primaryKeys=[urlLinkFrom, urlLinkTo]}}
on trying to configure the cascade.
I can see it's coming from the addEdgeFor function of the CascadeConnector, but I'm not clear on how to resolve this problem.
I've never used Cascade / CascadeConnector before. Is there something I'm missing?
It seems like some of your paths for sources and sinks are the same.
A Cascade uses the concept of directed graphs to build itself, so if you have a flow source and a sink pointing to the same location, that in essence creates a loop, which is disallowed since
it does not go from:
Source Location A to Sink Location B
but instead goes from:
Source Location A to Sink Location A.
"A Tap is not given an explicit name by design. This is so a given Tap instance can be re-used in different {#link Flow}s that may expect a source or sink by a different logical name, but are the same physical resource."
"In general, two instances of the same Tap class must have differing Identifiers (and different #equals)."
It turns out that JDBCTaps generate their identifier from the connection URL alone (and do not include the table name). So, as I was reading from one table and writing to a different table in the same database, it looked like I was reading from and writing to the same Tap, causing a loop.
As a work-around, I'm going to subclass JDBCTap and override the getIdentifier() method to include the table name.
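A minimal sketch of that subclass, assuming the cascading-jdbc 1.2 API (the constructor signature, imports, and the way the table name is read from TableDesc are illustrative, not verified against the library):

import cascading.jdbc.JDBCScheme;
import cascading.jdbc.JDBCTap;
import cascading.jdbc.TableDesc;

public class TableAwareJDBCTap extends JDBCTap {
    private final String tableName;

    public TableAwareJDBCTap(String connectionUrl, String driverClassName,
                             TableDesc tableDesc, JDBCScheme scheme) {
        super(connectionUrl, driverClassName, tableDesc, scheme);
        this.tableName = tableDesc.tableName; // assumed accessible, per the tableDesc shown in the error above
    }

    @Override
    public String getIdentifier() {
        // Append the table name so two taps on the same database but
        // different tables get distinct identifiers (and no longer form a loop).
        return super.getIdentifier() + "/" + tableName;
    }
}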