From the Jedis source code, the MasterListener run() method:
j.subscribe(new JedisPubSub() {
  @Override
  public void onMessage(String channel, String message) {
    log.fine("Sentinel " + host + ":" + port + " published: " + message + ".");
    String[] switchMasterMsg = message.split(" ");
    if (switchMasterMsg.length > 3) {
      if (masterName.equals(switchMasterMsg[0])) {
        initPool(toHostAndPort(Arrays.asList(switchMasterMsg[3], switchMasterMsg[4])));
      } else {
        log.fine("Ignoring message on +switch-master for master name "
            + switchMasterMsg[0] + ", our master name is " + masterName);
      }
    } else {
      log.severe("Invalid message received on Sentinel " + host + ":" + port
          + " on channel +switch-master: " + message);
    }
  }
}, "+switch-master");
If there are three Sentinels, then three MasterListeners are created. When a failover happens, the Jedis client calls initPool once per MasterListener, i.e. three times.
My question is: why not call initPool just once? When Sentinel marks the master objectively offline, shouldn't the Jedis client re-initialize the pool a single time on receiving the message?
I think synchronization is needed at the line "if (!master.equals(currentHostMaster)) {" to prevent the connection pool from being created multiple times (a sketch follows after the snippet below).
public class JedisSentinelPool extends JedisPoolAbstract {

  private volatile HostAndPort currentHostMaster;

  private void initPool(HostAndPort master) {
    if (!master.equals(currentHostMaster)) {
      currentHostMaster = master;
      if (factory == null) {
        // ...
      }
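For clarity, a minimal sketch of the synchronization I am suggesting (the initPoolLock field is my own name, not from the Jedis source):

private volatile HostAndPort currentHostMaster;
private final Object initPoolLock = new Object();

private void initPool(HostAndPort master) {
  // Serialize the check-then-act: without a lock, several MasterListener threads
  // can all pass the equals() check and rebuild the pool concurrently.
  synchronized (initPoolLock) {
    if (!master.equals(currentHostMaster)) {
      currentHostMaster = master;
      // ... rebuild the internal pool, now at most once per actual master change
    }
  }
}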
I have a large amount of data that I want to purge from the database. There are about six tables, of which three have a many-to-many relationship with CascadeType; all the others are log and history tables independent of those three.
I want to purge this data record by record, and if any record fails while deleting, I have to undo only the current record, report it in the console, and keep deleting the others.
I am trying to use the @Transactional annotation with Spring Boot, but all purging stops if an error occurs.
How do I manage this kind of need?
Here is what I did:
@Transactional
private void purgeCards(List<CardEntity> cardsTobePurge) {
    List<Long> nextCardsNumberToUpdate = getNextCardsWhichWillNotBePurge(cardsTobePurge);
    TransactionTemplate lTransTemplate = new TransactionTemplate(transactionManager);
    lTransTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRED);
    lTransTemplate.execute(new TransactionCallback<Object>() {
        @Override
        public Object doInTransaction(TransactionStatus status) {
            cardsTobePurge.forEach(cardTobePurge -> {
                Long nextCardNumberOfCurrent = cardTobePurge.getNextCard();
                if (nextCardsNumberToUpdate.contains(nextCardNumberOfCurrent)) {
                    CardEntity cardToUnlik = cardRepository.findByCardNumber(nextCardNumberOfCurrent);
                    unLink(cardToUnlik);
                }
                log.info(BATCH_TITLE + " Removing card Number : " + cardTobePurge.getCardNumber() + " with Id : "
                        + cardTobePurge.getId());
                List<CardHistoryEntity> historyEntitiesOfThisCard = cardHistoryRepository.findByCard(cardTobePurge);
                List<LogCreationCardEntity> logCreationEntitiesForThisCard = logCreationCardRepository
                        .findByCardNumber(cardTobePurge.getCardNumber());
                List<LogCustomerMergeEntity> logCustomerMergeEntitiesForThisCard = logCustomerMergeRepository
                        .findByCard(cardTobePurge);
                cardHistoryRepository.deleteAll(historyEntitiesOfThisCard);
                logCreationCardRepository.deleteAll(logCreationEntitiesForThisCard);
                logCustomerMergeRepository.deleteAll(logCustomerMergeEntitiesForThisCard);
                cardRepository.delete(cardTobePurge);
            });
            return Boolean.TRUE;
        }
    });
}
As a solution to my question:
I worked with TransactionTemplate to manage transactions manually, so if an exception is raised, the rollback applies only to the current iteration and processing continues with the other cards.
private void purgeCards(List<CardEntity> cardsTobePurge) {
    int[] counter = { 0 }; // to simulate the exception
    List<Long> nextCardsNumberToUpdate = findNextCardsWhichWillNotBePurge(cardsTobePurge);
    cardsTobePurge.forEach(cardTobePurge -> {
        Long nextCardNumberOfCurrent = cardTobePurge.getNextCard();
        CardEntity cardToUnlik = null;
        counter[0]++; // to simulate the exception
        if (nextCardsNumberToUpdate.contains(nextCardNumberOfCurrent)) {
            cardToUnlik = cardRepository.findByCardNumber(nextCardNumberOfCurrent);
        }
        purgeCard(cardTobePurge, nextCardsNumberToUpdate, cardToUnlik, counter);
    });
}

private void purgeCard(@NonNull CardEntity cardToPurge, List<Long> nextCardsNumberToUpdate, CardEntity cardToUnlik,
        int[] counter) {
    TransactionTemplate lTransTemplate = new TransactionTemplate(transactionManager);
    lTransTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRED);
    lTransTemplate.execute(new TransactionCallbackWithoutResult() {
        @Override
        public void doInTransactionWithoutResult(TransactionStatus status) {
            try {
                if (cardToUnlik != null)
                    unLink(cardToUnlik);
                log.info(BATCH_TITLE + " Removing card Number : " + cardToPurge.getCardNumber() + " with Id : "
                        + cardToPurge.getId());
                List<CardHistoryEntity> historyEntitiesOfThisCard = cardHistoryRepository.findByCard(cardToPurge);
                List<LogCreationCardEntity> logCreationEntitiesForThisCard = logCreationCardRepository
                        .findByCardNumber(cardToPurge.getCardNumber());
                List<LogCustomerMergeEntity> logCustomerMergeEntitiesForThisCard = logCustomerMergeRepository
                        .findByCard(cardToPurge);
                cardHistoryRepository.deleteAll(historyEntitiesOfThisCard);
                logCreationCardRepository.deleteAll(logCreationEntitiesForThisCard);
                logCustomerMergeRepository.deleteAll(logCustomerMergeEntitiesForThisCard);
                cardRepository.delete(cardToPurge);
                if (counter[0] == 2) // to simulate the exception
                    throw new Exception(); // to simulate the exception
            } catch (Exception e) {
                status.setRollbackOnly();
                if (cardToPurge != null)
                    log.error(BATCH_TITLE + " Problem with card Number : " + cardToPurge.getCardNumber()
                            + " with Id : " + cardToPurge.getId(), e);
                else
                    log.error(BATCH_TITLE + " Card entity is null", e);
            }
        }
    });
}
I have a crawl function that also checks whether the content contains the parameter. If it does, I will write it to the database. How can I use the following code as a read job for Spring Batch?
public void crawl(String baseUrl, String url, String postgresParam) {
    if (!urls.contains(url) && url.startsWith(baseUrl)) {
        // System.out.println(">> count: " + count + " [" + url + "]");
        urls.add(url);
        try {
            Connection connection = Jsoup.connect(url).userAgent(USER_AGENT);
            Document htmlDocument = connection.get();
            Elements linksOnPage = htmlDocument.select("a[href]");
            bodyContent = htmlDocument.body().text();
            String title = htmlDocument.title();
            searchParameters(url, title);
            // count++;
            for (Element link : linksOnPage) {
                crawl(baseUrl, link.absUrl("href"), postgresParam);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private void searchParameters(String URL, String title) {
    for (String param : postgresParamArray) {
        if (bodyContent.toLowerCase().contains(param.toLowerCase())) {
            System.out.println(">>>>>> Found: " + " [" + param + "]" + " [" + URL + "]" + " [" + title + "]");
        }
    }
}
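One possible adaptation (a sketch of my own, not from the post): flatten the recursion into an explicit frontier queue and return one page body per read() call, since Spring Batch treats a null return from an ItemReader as the end of input. The class name, USER_AGENT constant, and constructor arguments here are assumptions; the postgresParam matching from searchParameters would then move into an ItemProcessor or the writer.

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.springframework.batch.item.ItemReader;

public class CrawlerItemReader implements ItemReader<String> {

    private static final String USER_AGENT = "Mozilla/5.0"; // same idea as the crawler's constant

    private final Queue<String> frontier = new ArrayDeque<>();
    private final Set<String> visited = new HashSet<>();
    private final String baseUrl;

    public CrawlerItemReader(String baseUrl, String startUrl) {
        this.baseUrl = baseUrl;
        frontier.add(startUrl);
    }

    @Override
    public String read() throws Exception {
        // Pull URLs until one yields a page; a null return ends the step.
        while (!frontier.isEmpty()) {
            String url = frontier.poll();
            if (!visited.add(url) || !url.startsWith(baseUrl)) {
                continue; // already crawled, or outside the crawl scope
            }
            Document htmlDocument = Jsoup.connect(url).userAgent(USER_AGENT).get();
            for (Element link : htmlDocument.select("a[href]")) {
                frontier.add(link.absUrl("href"));
            }
            return htmlDocument.body().text(); // one item = one page's body text
        }
        return null;
    }
}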
I have written Java code to send SMS using an SMS gateway. The JAVA OpenSMPP API is used to implement the logic for sending SMPP requests. I need the information below, which is used to connect to the SMS gateway and to send SMS:
SMS_GATEWAY_USERNAME 906o2portt02
SMS_GATEWAY_PORT 9205 S
SMS_GATEWAY_IP_2 34.22.91.166
SMS_GATEWAY_IP_1 80.77.67.145
I am able to send SMS, but I don't understand why I am not receiving them. I have also put debug statements in my code to check for errors; when I checked the log file, the output below says the SMS has been sent. Previously I had a different port number, username, and password, and I was able to send and receive SMS using the same Java code. Now I have the requirement to send SMS through this gateway, and it does send the SMS, but for some reason I am not receiving it. Is there any way to check what happened to an SMS that has been sent?
Below is my code:
public class SMSClient
{
private static final Logger logger = LoggerFactory
.getLogger(SMSClient.class);
@Autowired
private SMSSettings smsSettings;
@Autowired
private OracleSettings oracleSettings;
/**
* If the application is bound to the SMSC.
*/
boolean _bound = false;
public boolean send(String text,
List<AlertCommunicationAddress> toAddressesSMS)
{
List<String> toAddressesSMSString = new ArrayList<String>();
for (AlertCommunicationAddress alertComAddr : toAddressesSMS)
{
List<AlertRecpGrpMember> recpMembers = alertComAddr
.getAlertRecipientsGroup().getAlertRecpGrpMembers();
for (AlertRecpGrpMember recpMem : recpMembers)
{
// check here if the member belongs to the same environment on
// which SMS is being sent.
if ((recpMem.getIsDefault() != null && recpMem.getIsDefault()
.equalsIgnoreCase("Y"))
|| (recpMem.getRunEnvironment() != null && recpMem
.getRunEnvironment().equalsIgnoreCase(
oracleSettings.getRunEnv())))
{
toAddressesSMSString.add(recpMem.getMember());
}
}
}
logger.debug("Original SMS to be sent : " + text);
String smscHost1 = smsSettings.getHost1();
Integer smscPort = smsSettings.getPort();
if (toAddressesSMSString.isEmpty())
{
return false;
}
for (String phoneNumber : toAddressesSMSString)
{
try
{
Session session = getSession(smscHost1, smscPort,
smsSettings.getUsername(), smsSettings.getPassword());
if (session == null)
{
String smscHost2 = smsSettings.getHost2();
logger.error("SMS --- Unable to get the session with Host 1 (" + smscHost1 + ":" + smscPort + ") , will try Host 2 (" + smscHost2 + ") now.");
session = getSession(smscHost2, smscPort,
smsSettings.getUsername(),
smsSettings.getPassword());
if (session == null)
{
logger.error("SMS --- Unable to get the session with Host 1 (" + smscHost1 + ") and Host 2 (" + smscHost2 + "). Please check with the SMS Gateway.");
return false;
}
}
logger.debug("SMS --- Created Session object " + session);
SubmitSM request = new SubmitSM();
request.setSourceAddr(new Address((byte) 5, (byte) 0,
"RM2Support"));
request.setDestAddr(createAddress(phoneNumber));
request.setProtocolId((byte) 0);
request.setPriorityFlag((byte) 0);
request.setRegisteredDelivery((byte) 1); // we want delivery
// reports
request.setDataCoding((byte) 0);
request.setSmDefaultMsgId((byte) 0);
// request.setScheduleDeliveryTime(deliveryTime); // you can
// skip
// this
request.setReplaceIfPresentFlag((byte) 0);
// Send the request
request.assignSequenceNumber(true);
// this is to send long messages
request.setEsmClass((byte) Data.SM_UDH_GSM);
String[] splittedMsg = splitMessage(text, 153);
int totalSegments = splittedMsg.length;
logger.debug("SMS : Number of splitted segments :: "
+ totalSegments);
// iterating on splittedMsg array. Only Sequence Number and
// short
// message text will change each time
Random random = new Random();
int randomInt = random.nextInt();
logger.debug("SMS---- Reference Number : " + randomInt);
for (int i = 0; i < totalSegments; i++)
{
ByteBuffer ed = new ByteBuffer();
ed.appendByte((byte) 5); // UDH Length
ed.appendByte((byte) 0x00); // IE Identifier
ed.appendByte((byte) 3); // IE Data Length
ed.appendByte((byte) randomInt); // Reference Number
ed.appendByte((byte) totalSegments); // Number of pieces
ed.appendByte((byte) (i + 1)); // Sequence number
ed.appendString(splittedMsg[i], Data.ENC_ASCII);
request.setShortMessageData(ed);
logger.debug("Hello...reached here...now about the submit the request::::");
SubmitSMResp response = session.submit(request);
logger.debug("SMS --- Submit response "
+ response.getCommandStatus());
// response = smsSession.submitMulti(request);
logger.debug("SMS --- Submit response "
+ response.getCommandStatus());
String messageId = response.getMessageId();
logger.debug("SMS --- Message ID = " + messageId);
}
enquireLink(session);
unbind(session);
} catch (Exception e)
{
logger.debug("Exception while sending SMS with Phone number :::" + phoneNumber + "::::" + e);
continue;
}
}
return true;
}
private Session getSession(String smscHost, int smscPort,
String smscUsername, String smscPassword) throws Exception
{
try
{
TCPIPConnection connection = new TCPIPConnection(smscHost, smscPort);
connection.setReceiveTimeout(6000);
connection.setIOBufferSize(8188);
connection.setReceiveBufferSize(8188);
Session session = new Session(connection);
// bind now
if (_bound)
{
logger.debug("Already bound, unbind first.");
return session;
}
BindRequest request = new BindTransmitter();
request.setSystemId(smscUsername);
request.setPassword(smscPassword);
// request.setSystemType(systemType);
// request.setAddressRange(addressRange);
request.setInterfaceVersion((byte) 0x34); // SMPP protocol version
logger.debug("SMS --- Bind request :: " + request.debugString());
logger.debug("SMS --- Created Session object :: " + session);
BindResponse response = session.bind(request);
logger.debug("Bind response " + response.debugString());
if (response.getCommandStatus() == Data.ESME_ROK)
{
logger.debug("SMS --- Binded with SMSC Server");
_bound = true;
} else
{
logger.error("SMS --- Unable to bind with SMSC Server :: Code :: "
+ response.getCommandStatus());
}
Integer respCode = new Integer(response.getCommandStatus());
logger.debug("SMS -- Response Code ::" + respCode);
response.setCommandStatus(respCode);
Integer comLength = new Integer(response.getCommandLength());
logger.debug("SMS -- CommandLength ::" + comLength);
response.setCommandLength(comLength);
logger.debug("SMS --- Response from SMSC" + response.toString());
return session;
} catch (WrongLengthOfStringException e)
{
logger.error("SMS -- Wrong length string exception"
+ e.getMessage());
} catch (ValueNotSetException e)
{
logger.error("SMS -- Value not set exception" + e.getMessage());
} catch (TimeoutException e)
{
logger.error("SMS -- Timeout exception " + e.getMessage());
} catch (PDUException e)
{
logger.error("SMS -- PDU exception " + e.getMessage());
} catch (WrongSessionStateException e)
{
logger.error("SMS -- Wrong Session exception " + e.getMessage());
} catch (IOException e)
{
logger.error("SMS --- Could not able to connect the host/port or Check the Username/Password for connection ::"
+ e.getMessage());
} catch (Exception e)
{
logger.error("SMS -- Error while sending SMS :: " + e.getMessage());
}
return null;
}
private Address createAddress(String address)
throws WrongLengthOfStringException
{
Address addressInst = new Address();
addressInst.setTon((byte) 5); // national ton
addressInst.setNpi((byte) 0); // numeric plan indicator
addressInst.setAddress(address, Data.SM_ADDR_LEN);
logger.debug("SMS -------- Address :: " + addressInst);
return addressInst;
}
private Session unbind(Session session)
{
try
{
if (!_bound)
{
System.out.println("Not bound, cannot unbind.");
return session;
}
// send the request
logger.debug("Going to unbind.");
if (session.getReceiver().isReceiver())
{
logger.debug("SMS --- Unbinding --- It can take a while to stop the receiver.");
}
UnbindResp response = session.unbind();
logger.debug("Unbind response " + response.debugString());
_bound = false;
} catch (Exception e)
{
logger.debug("Unbind operation failed. " + e);
}
return session;
}
/**
* Creates a new instance of <code>EnquireSM</code> class. This PDU is used
* to check that application level of the other party is alive. It can be
* sent both by SMSC and ESME.
*
* See "SMPP Protocol Specification 3.4, 4.11 ENQUIRE_LINK Operation."
*
* @see Session#enquireLink(EnquireLink)
* @see EnquireLink
* @see EnquireLinkResp
*/
private void enquireLink(Session session)
{
try
{
EnquireLink request = new EnquireLink();
EnquireLinkResp response;
logger.debug("SMS ---- Enquire Link request "
+ request.debugString());
response = session.enquireLink(request);
logger.debug("SMS --- Enquire Link response "
+ response.debugString());
} catch (Exception e)
{
logger.debug("SMS ---- Enquire Link operation failed :: " + e);
}
}
private String[] splitMessage(String s, int size)
{
if (s == null || size <= 0)
return null;
int chunks = s.length() / size + ((s.length() % size > 0) ? 1 : 0);
String[] arr = new String[chunks];
for (int i = 0, j = 0, l = s.length(); i < l; i += size, j++)
arr[j] = s.substring(i, Math.min(l, i + size));
return arr;
}
}
Below are the parameters I need to consider when sending/receiving SMS, but I really don't know whether the JAVA OpenSMPP API uses these settings:
You can use the following code to query the SMPP server to check what happened to your message (from https://github.com/OpenSmpp/opensmpp/blob/master/client/src/main/java/org/smpp/test/SMPPTest.java):
/**
* Creates a new instance of <code>QuerySM</code> class, lets you set
* subset of fields of it. This PDU is used to fetch information
* about status of already submitted message providing that you 'remember'
* message id of the submitted message. The message id is assigned
* by SMSC and is returned to you with the response to the submision
* PDU (SubmitSM, DataSM etc.).
*
* See "SMPP Protocol Specification 3.4, 4.8 QUERY_SM Operation."
* @see Session#query(QuerySM)
* @see QuerySM
* @see QuerySMResp
*/
private void query() {
debug.enter(this, "SMPPTest.query()");
try {
QuerySM request = new QuerySM();
QuerySMResp response;
// input values
messageId = getParam("Message id", messageId);
sourceAddress = getAddress("Source", sourceAddress);
// set values
request.setMessageId(messageId);
request.setSourceAddr(sourceAddress);
// send the request
System.out.println("Query request " + request.debugString());
if (asynchronous) {
session.query(request);
} else {
response = session.query(request);
System.out.println("Query response " + response.debugString());
messageId = response.getMessageId();
}
} catch (Exception e) {
event.write(e, "");
debug.write("Query operation failed. " + e);
System.out.println("Query operation failed. " + e);
} finally {
debug.exit(this);
}
}
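As a usage sketch of my own (assuming you kept the messageId returned by SubmitSMResp in the send() loop above and the session is still bound; run it inside the existing try/catch, since the setters can throw):

// Ask the SMSC what became of one submitted message.
// The source address must match the one used in the original SubmitSM.
QuerySM query = new QuerySM();
query.setMessageId(messageId); // from SubmitSMResp.getMessageId()
query.setSourceAddr(new Address((byte) 5, (byte) 0, "RM2Support"));
QuerySMResp queryResp = session.query(query);
logger.debug("SMS --- Query response :: " + queryResp.debugString());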
Community, could you please help me understand why ~3% of my messages don't end up in HDFS? I wrote a simple producer in Java to generate 10 million messages.
public static final String TEST_SCHEMA = "{"
+ "\"type\":\"record\","
+ "\"name\":\"myrecord\","
+ "\"fields\":["
+ " { \"name\":\"str1\", \"type\":\"string\" },"
+ " { \"name\":\"str2\", \"type\":\"string\" },"
+ " { \"name\":\"int1\", \"type\":\"int\" }"
+ "]}";
public KafkaProducerWrapper(String topic) throws UnknownHostException {
// store topic name
this.topic = topic;
// initialize kafka producer
Properties config = new Properties();
config.put("client.id", InetAddress.getLocalHost().getHostName());
config.put("bootstrap.servers", "myserver-1:9092");
config.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
config.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
config.put("schema.registry.url", "http://myserver-1:8089");
config.put("acks", "all");
producer = new KafkaProducer(config);
// parse schema
Schema.Parser parser = new Schema.Parser();
schema = parser.parse(TEST_SCHEMA);
}
public void send() {
// generate key
int key = (int) (Math.random() * 20);
// generate record
GenericData.Record r = new GenericData.Record(schema);
r.put("str1", "text" + key);
r.put("str2", "text2" + key);
r.put("int1", key);
final ProducerRecord<String, GenericRecord> record = new ProducerRecord<>(topic, "K" + key, (GenericRecord) r);
producer.send(record, new Callback() {
public void onCompletion(RecordMetadata metadata, Exception e) {
if (e != null) {
logger.error("Send failed for record {}", record, e);
messageErrorCounter++;
return;
}
logger.debug("Send succeeded for record {}", record);
messageCounter++;
}
});
}
public String getStats() { return "Messages sent: " + messageCounter + "/" + messageErrorCounter; }
public long getMessageCounter() {
return messageCounter + messageErrorCounter;
}
public void close() {
producer.close();
}
public static void main(String[] args) throws InterruptedException, UnknownHostException {
// initialize kafka producer
KafkaProducerWrapper kafkaProducerWrapper = new KafkaProducerWrapper("my-test-topic");
long max = 10000000L;
for (long i = 0; i < max; i++) {
kafkaProducerWrapper.send();
}
logger.info("producer-demo sent all messages");
while (kafkaProducerWrapper.getMessageCounter() < max)
{
logger.info(kafkaProducerWrapper.getStats());
Thread.sleep(2000);
}
logger.info(kafkaProducerWrapper.getStats());
kafkaProducerWrapper.close();
}
And I use the Confluent HDFS Connector in standalone mode to write data to HDFS. The configuration is as follows:
name=hdfs-consumer-test
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=my-test-topic
hdfs.url=hdfs://my-cluster/kafka-test
hadoop.conf.dir=/etc/hadoop/conf/
flush.size=100000
rotate.interval.ms=20000
# increase timeouts to avoid CommitFailedException
consumer.session.timeout.ms=300000
consumer.request.timeout.ms=310000
heartbeat.interval.ms=60000
session.timeout.ms=100000
The connector writes the data into HDFS, but after waiting for 20000 ms (due to rotate.interval.ms) not all messages are received.
scala> spark.read.avro("/kafka-test/topics/my-test-topic/partition=*/my-test-topic*")
.count()
res0: Long = 9749015
Any idea what the reason for this behavior is? Where is my mistake? I'm using Confluent 3.0.1 / Kafka 0.10.0.1.
Are you seeing that the last few messages are not moved to HDFS? If so, it's likely you are running into the issue described here: https://github.com/confluentinc/kafka-connect-hdfs/pull/100
Try sending one more message to the topic after rotate.interval.ms has expired to validate that this is what you are running into. If you need to rotate based on time, it's probably a good idea to upgrade to pick up the fix.
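For example, a trivial probe reusing the KafkaProducerWrapper from the question (run it from a main like the one above, which already declares the needed exceptions; the extra record is only a probe):

// After rotate.interval.ms has elapsed, push one more record; if the "missing"
// ~3% then appear in HDFS, the rotation behavior described above is the cause.
KafkaProducerWrapper probe = new KafkaProducerWrapper("my-test-topic");
probe.send();
Thread.sleep(5000); // allow the async send callback and the connector to catch up
probe.close();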
The latest Android Wear update comes with support for ChannelApi, which can be used for sending files to/from a wearable or handheld. The problem is I cannot find a single sample of how to use this functionality; the Android samples don't include this feature. So if anyone knows how to use sendFile/receiveFile and can give a quick example here, it would be appreciated.
Take a look at this answer to learn how to use the Channel API to create the channel between the devices.
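For instance, retrieving the nodeId might look like this (a sketch of my own, assuming an already-connected GoogleApiClient and a background thread, since await() must not run on the main thread):

// Pick the first connected node as the target for the channel (illustrative only).
NodeApi.GetConnectedNodesResult nodesResult =
        Wearable.NodeApi.getConnectedNodes(googleClient).await();
List<Node> nodes = nodesResult.getNodes();
String nodeId = nodes.isEmpty() ? null : nodes.get(0).getId();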
After you create the googleClient and retrieve the nodeId of the device you want to send the file to, you can basically use the following code on the wearable side:
//opening channel
ChannelApi.OpenChannelResult result = Wearable.ChannelApi.openChannel(googleClient, nodeId, "/mypath").await();
channel = result.getChannel();
//sending file
channel.sendFile(googleClient, Uri.fromFile(file));
Then, on the handheld device:
// receiving the file
@Override
public void onChannelOpened(Channel channel) {
    if (channel.getPath().equals("/mypath")) {
        file = new File("/sdcard/file.txt");
        try {
            file.createNewFile();
        } catch (IOException e) {
            // handle error
        }
        channel.receiveFile(mGoogleApiClient, Uri.fromFile(file), false);
    }
}

// when file is ready
@Override
public void onInputClosed(Channel channel, int i, int i1) {
    MainActivity.this.runOnUiThread(new Runnable() {
        public void run() {
            Toast.makeText(MainActivity.this, "File received!", Toast.LENGTH_SHORT).show();
        }
    });
}
If you need more information about this, please visit Google's reference site.
This is just an addition to the answer: also check your WearableListenerService in the AndroidManifest. Its intent filter should contain the com.google.android.gms.wearable.CHANNEL_EVENT action.
I have used some code like this with success. Transfers can be fairly slow.
Both the handheld and wearable applications MUST HAVE the same applicationId in their gradle files.
The wearable needs a manifest entry something like this
<service
    android:name="com.me.myWearableListenerService"
    android:enabled="true"
    android:exported="true">
    <intent-filter>
        <!-- listeners receive events that match the action and data filters -->
        <action android:name="com.google.android.gms.wearable.CHANNEL_EVENT"/>
        <data android:scheme="wear" android:host="*" android:pathPrefix="/MyAppPath" />
    </intent-filter>
</service>
to launch its WearableListenerService when the Handheld sends a file.
private static final String WEARABLE_FILE_COPY = "MyAppPath/FileCopy";

private void copyFileToWearable(final File file, final String nodeId, Context ctx) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            final ChannelClient cc = Wearable.getChannelClient(ctx);
            ChannelClient.ChannelCallback ccb = new ChannelClient.ChannelCallback() {
                @Override
                public void onChannelClosed(@NonNull ChannelClient.Channel channel, int i, int i1) {
                    super.onChannelClosed(channel, i, i1);
                    Log.d(TAG, "copyFileToWearable " + channel.getNodeId() + " onChannelClosed ");
                    cc.unregisterChannelCallback(this);
                }

                @Override
                public void onOutputClosed(@NonNull ChannelClient.Channel channel, int i, int i1) {
                    super.onOutputClosed(channel, i, i1);
                    Log.d(TAG, "copyFileToWearable " + channel.getNodeId() + " onOutputClosed ");
                    cc.unregisterChannelCallback(this);
                    // this is the transfer success callback ...
                }
            };
            ChannelClient.Channel c;
            Log.d(TAG, "copyFileToWearable transfer file " + file.getName() +
                    " size:" + file.length() / 1000000 + "Mb");
            try {
                // send the filename to the wearable with the channel open
                c = Tasks.await(cc.openChannel(nodeId, WEARABLE_FILE_COPY + "/" + file.getName()));
                Log.d(TAG, "copyFileToWearable channel opened to " + nodeId);
                Log.d(TAG, "copyFileToWearable register callback");
                Tasks.await(cc.registerChannelCallback(c, ccb));
                Log.d(TAG, "copyFileToWearable sending file " + file.getName());
                Tasks.await(cc.sendFile(c, Uri.fromFile(file)));
                // completion is indicated by onOutputClosed
            } catch (Exception e) {
                Log.w(TAG, "copyFileToWearable exception " + e.getMessage());
                cc.unregisterChannelCallback(ccb);
                // failure
            }
        }
    }).start();
}
Call this from onChannelOpened in a WearableListenerService when c.getPath() starts with WEARABLE_FILE_COPY; a sketch of such a dispatch follows after the code below.
private void receiveFileFromHandheld(final ChannelClient.Channel c, File myStorageLocation, Context ctx) {
    // filename sent by the handheld is at the end of the path
    String[] bits = c.getPath().split("\\/");
    // store in a suitable spot
    final String receivedFileName = myStorageLocation.getAbsolutePath() + "/" + bits[bits.length - 1];
    new Thread(new Runnable() {
        @Override
        public void run() {
            final ChannelClient cc = Wearable.getChannelClient(ctx);
            ChannelClient.ChannelCallback ccb = new ChannelClient.ChannelCallback() {
                boolean mClosed = false;

                @Override
                public void onChannelClosed(@NonNull ChannelClient.Channel channel, int i, int i1) {
                    super.onChannelClosed(channel, i, i1);
                    Log.d(TAG, "receiveFileFromHandheld " + channel.getNodeId() + " onChannelClosed ");
                    if (!mClosed) {
                        // failure ...
                    }
                }

                @Override
                public void onInputClosed(@NonNull ChannelClient.Channel channel, int i, int i1) {
                    super.onInputClosed(channel, i, i1);
                    Log.d(TAG, "receiveFileFromHandheld " + channel.getNodeId() + " onInputClosed ");
                    long fs = new File(receivedFileName).length();
                    Log.d(TAG, "receiveFileFromHandheld got " + receivedFileName +
                            " size:" + fs / 1000000 + "Mb");
                    cc.unregisterChannelCallback(this);
                    mClosed = true;
                    // success !
                }
            };
            try {
                Log.d(TAG, "receiveFileFromHandheld register callback");
                Tasks.await(cc.registerChannelCallback(c, ccb));
                Log.d(TAG, "receiveFileFromHandheld receiving file " + receivedFileName);
                Tasks.await(cc.receiveFile(c, Uri.fromFile(new File(receivedFileName)), false));
                // completion is indicated by onInputClosed
            } catch (Exception e) {
                Log.w(TAG, "receiveFileFromHandheld exception " + e.getMessage());
                cc.unregisterChannelCallback(ccb);
                // failure ...
            }
        }
    }).start();
}
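A minimal sketch of that onChannelOpened dispatch in the wearable's WearableListenerService (the WEARABLE_FILE_COPY constant must match the handheld's; the storage location here is an assumption, adjust to your app):

@Override
public void onChannelOpened(ChannelClient.Channel channel) {
    // Route only this app's file-copy channels; ignore any other channel paths.
    if (channel.getPath().startsWith(WEARABLE_FILE_COPY)) {
        receiveFileFromHandheld(channel, getFilesDir(), this);
    }
}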