I am using an Arduino Uno with the Desloo W5100 Ethernet shield. Whenever I try to make calls to Parse using Temboo, the device just hangs. Sometimes for minutes...sometimes indefinitely. Here is what I run:
void updateParseDoorState() {
  if (!ENABLE_DOOR_STATE_PUSHES) {
    Serial.println("Door state pushing disabled. Skipping.");
    return;
  }

  Serial.println("Pushing door state to database...");

  TembooChoreo UpdateObjectChoreo(client);

  // Invoke the Temboo client
  UpdateObjectChoreo.begin();

  // Set Temboo account credentials
  UpdateObjectChoreo.setAccountName(TEMBOO_ACCOUNT);
  UpdateObjectChoreo.setAppKeyName(TEMBOO_APP_KEY_NAME);
  UpdateObjectChoreo.setAppKey(TEMBOO_APP_KEY);

  // Set profile to use for execution
  UpdateObjectChoreo.setProfile("ParseAccount");

  // Set Choreo inputs
  String ObjectIDValue = "xxxxxxxxxx";
  UpdateObjectChoreo.addInput("ObjectID", ObjectIDValue);
  String ClassNameValue = "DoorState";
  UpdateObjectChoreo.addInput("ClassName", ClassNameValue);
  String ObjectContentsValue = (currentState == OPEN) ? "{\"isOpen\":true}" : "{\"isOpen\":false}";
  UpdateObjectChoreo.addInput("ObjectContents", ObjectContentsValue);

  // Identify the Choreo to run
  UpdateObjectChoreo.setChoreo("/Library/Parse/Objects/UpdateObject");

  // Run the Choreo; when results are available, print them to serial
  int returnStatus = UpdateObjectChoreo.run();
  if (returnStatus != 0) {
    setEthernetIndicator(EthernetStatus::SERVICES_DISCONNECTED);
    Serial.print("Temboo error: "); Serial.println(returnStatus);

    // Read the name of the next output item
    String returnResultName = UpdateObjectChoreo.readStringUntil('\x1F');
    returnResultName.trim(); // use "trim" to get rid of newlines
    Serial.print("Return result name: "); Serial.println(returnResultName);

    // Read the value of the next output item
    String returnResultData = UpdateObjectChoreo.readStringUntil('\x1E');
    returnResultData.trim(); // use "trim" to get rid of newlines
    Serial.print("Return result data: "); Serial.println(returnResultData);
  }

  /*while(UpdateObjectChoreo.available()) {
    char c = UpdateObjectChoreo.read();
    Serial.print(c);
  }*/

  UpdateObjectChoreo.close();

  Serial.println("Pushed door state to database!");
  Serial.println("Waiting 30s to avoid overloading Temboo...");
  delay(30000);
}
I get this in the serial monitor:
Current state:6666ÿ &‰ SP S P U WR SR R PR P 66Temboo error: 223
This indicates some type of HTTP error, but I never get to print what the error is, because the serial monitor is stuck there forever and eventually disconnects.
I work at Temboo.
It sounds like you might be running out of memory on your board (a common occurrence on resource-constrained hardware like Arduino). You can find our tutorial on how to conserve memory usage while using Temboo here:
https://temboo.com/hardware/profiles
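For example (a generic Arduino illustration on my part, not taken from that tutorial), wrapping string literals in the F() macro keeps them in flash (PROGMEM) instead of copying them into the Uno's 2 KB of SRAM:

// Each literal wrapped in F() stays in flash and no longer occupies SRAM.
Serial.println(F("Pushing door state to database..."));
Serial.println(F("Waiting 30s to avoid overloading Temboo..."));

Applying this to every Serial.print/println literal in a sketch like the one above can free a significant chunk of RAM.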
Feel free to get in touch with Temboo Support at any time if you have further questions - we're always available and happy to help.
EDIT:
I have heavily edited this question after making some significant new discoveries, and because it had not yet received any answers.
Historically (and AFAIK), keeping your Mac awake in closed-display mode without meeting Apple's requirements has only been possible with a kernel extension (kext) or a command run as root. Recently, however, I have discovered that there must be another way. I could really use some help figuring out how to get this working for use in a (100% free, no IAP) sandboxed Mac App Store (MAS) compatible app.
I have confirmed that some other MAS apps are able to do this, and it looks like they might be writing YES to a key named clamshellSleepDisabled. Or perhaps there's some other trickery involved that causes the key value to be set to YES? I found the function in IOPMrootDomain.cpp:
void IOPMrootDomain::setDisableClamShellSleep( bool val )
{
    if (gIOPMWorkLoop->inGate() == false) {
        gIOPMWorkLoop->runAction(
            OSMemberFunctionCast(IOWorkLoop::Action, this, &IOPMrootDomain::setDisableClamShellSleep),
            (OSObject *)this,
            (void *)val);
        return;
    }
    else {
        DLOG("setDisableClamShellSleep(%x)\n", (uint32_t) val);
        if ( clamshellSleepDisabled != val )
        {
            clamshellSleepDisabled = val;
            // If clamshellSleepDisabled is reset to 0, reevaluate if
            // system need to go to sleep due to clamshell state
            if ( !clamshellSleepDisabled && clamshellClosed)
                handlePowerNotification(kLocalEvalClamshellCommand);
        }
    }
}
I'd like to give this a try and see if that's all it takes, but I don't really have any idea how to go about calling this function. It's certainly not part of the IOPMrootDomain documentation, and I can't seem to find any helpful example code even for functions that are in the IOPMrootDomain documentation, such as setAggressiveness or setPMAssertionLevel. Here's some evidence of what's going on behind the scenes according to Console:
[Console log screenshot not included]
I've had a tiny bit of experience working with IOPMrootDomain via adapting some of ControlPlane's source for another project, but I'm at a loss for how to get started on this. Any help would be greatly appreciated. Thank you!
EDIT:
With @pmdj's contribution/answer, this has been solved!
Full example project:
https://github.com/x74353/CDMManager
This ended up being surprisingly simple/straightforward:
1. Import header:
#import <IOKit/pwr_mgt/IOPMLib.h>
2. Add this function in your implementation file:
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint64_t input[1] = { disable ? 1 : 0 };

    return IOConnectCallScalarMethod(root_domain_connection, kPMSetClamshellSleepState,
                                     input, 1, NULL, &num_outputs);
}
3. Use the following to call the above function from somewhere else in your implementation:
io_connect_t connection = IO_OBJECT_NULL;
io_service_t pmRootDomain = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPMrootDomain"));
IOServiceOpen(pmRootDomain, current_task(), 0, &connection);
IOObjectRelease(pmRootDomain); // release the service reference; the connection keeps its own

// 'enable' is a bool you should assign a YES or NO value to prior to making this call
RootDomain_SetDisableClamShellSleep(connection, enable);
IOServiceClose(connection);
I have no personal experience with the PM root domain, but I do have extensive experience with IOKit, so here goes:
You want IOPMrootDomain::setDisableClamShellSleep() to be called.
A code search for sites calling setDisableClamShellSleep() quickly reveals a location in RootDomainUserClient::externalMethod(), in the file iokit/Kernel/RootDomainUserClient.cpp. This is certainly promising, as externalMethod() is what gets called in response to user space programs calling the IOConnectCall*() family of functions.
Let's dig in:
IOReturn RootDomainUserClient::externalMethod(
    uint32_t selector,
    IOExternalMethodArguments * arguments,
    IOExternalMethodDispatch * dispatch __unused,
    OSObject * target __unused,
    void * reference __unused )
{
    IOReturn ret = kIOReturnBadArgument;
    switch (selector)
    {
        …
        case kPMSetClamshellSleepState:
            fOwner->setDisableClamShellSleep(arguments->scalarInput[0] ? true : false);
            ret = kIOReturnSuccess;
            break;
        …
So, to invoke setDisableClamShellSleep() you'll need to:
1. Open a user client connection to IOPMrootDomain. This looks straightforward, because:
- Upon inspection, IOPMrootDomain has an IOUserClientClass property of RootDomainUserClient, so IOServiceOpen() from user space will by default create a RootDomainUserClient instance.
- IOPMrootDomain does not override the newUserClient member function, so there are no access controls there.
- RootDomainUserClient::initWithTask() does not appear to place any restrictions (e.g. root user, code signing) on the connecting user space process.
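As an aside (my own addition, not part of the original reasoning), you can confirm the IOUserClientClass property from user space before opening a connection:

io_service_t root_domain_service = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPMrootDomain"));
// For IOPMrootDomain this property should read "RootDomainUserClient".
CFTypeRef ucClass = IORegistryEntryCreateCFProperty(root_domain_service, CFSTR("IOUserClientClass"), kCFAllocatorDefault, 0);
if (ucClass != NULL) {
    CFShow(ucClass);   // prints the user client class name
    CFRelease(ucClass);
}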
So it should simply be a case of running this code in your program (reusing root_domain_service obtained above):
io_connect_t connection = IO_OBJECT_NULL;
IOReturn ret = IOServiceOpen(
    root_domain_service,
    current_task(),
    0, // user client type, ignored
    &connection);
2. Call the appropriate external method.
From the code excerpt earlier on, we know that the selector must be kPMSetClamshellSleepState.
arguments->scalarInput[0] being zero will call setDisableClamShellSleep(false), while a nonzero value will call setDisableClamShellSleep(true).
This amounts to:
IOReturn RootDomain_SetDisableClamShellSleep(io_connect_t root_domain_connection, bool disable)
{
    uint32_t num_outputs = 0;
    uint64_t inputs[] = { disable ? 1 : 0 };
    return IOConnectCallScalarMethod(
        root_domain_connection, kPMSetClamshellSleepState,
        inputs, 1, // 1 = length of array 'inputs'
        NULL, &num_outputs);
}
When you're done with your io_connect_t handle, don't forget to IOServiceClose() it.
This should let you toggle clamshell sleep on or off. Note that there does not appear to be any provision for automatically resetting the value to its original state, so if your program crashes or exits without cleaning up after itself, whatever state was last set will remain. This might not be great from a user experience perspective, so perhaps try to defend against it somehow, for example in a crash handler.
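One possible shape for that cleanup (my own sketch, reusing the RootDomain_SetDisableClamShellSleep() helper above): register an atexit() handler that restores clamshell sleep on normal termination. Note that atexit() does not run on a crash, so a real app would pair this with signal or exception handlers.

#include <stdlib.h>

static io_connect_t g_pm_connection = IO_OBJECT_NULL; // set after IOServiceOpen()

static void restore_clamshell_sleep(void)
{
    if (g_pm_connection != IO_OBJECT_NULL) {
        RootDomain_SetDisableClamShellSleep(g_pm_connection, false); // re-enable clamshell sleep
        IOServiceClose(g_pm_connection);
        g_pm_connection = IO_OBJECT_NULL;
    }
}

// After opening the connection, register the handler once:
// atexit(restore_clamshell_sleep);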
I am writing a PCI driver with a character device for an interface (Linux 4.9.13). Here's the scenario that bothers me:
Run touch /dev/foo0 which creates a normal file in the /dev directory.
Load the driver module. Here's pseudocode representing what happens there (pretty standard character device registration):
// When the module is initialized:
alloc_chrdev_region(&dev, 0, 256, "foo");
class = class_create(THIS_MODULE, "foo");

// Later, when a suitable PCI device is connected, the probe function
// calls the following:
cdev_init(dev->md_cdev, &fops);
dev->md_devnum = MKDEV(major, 0 + index);
res = cdev_add(dev->md_cdev, dev->md_devnum, 1);
dev->md_sysfsdev = device_create(class, NULL, dev->md_devnum, NULL, "foo%d", index);
Details:
index is just another free index
What seems weird to me is that nothing raises an error about there already being a /dev/foo0 file which is not a character device. I do check all the errors (I think so), but I omitted the related code for the sake of conciseness; the usual pattern is sketched below. Everything works as expected if I do not run touch /dev/foo0. Otherwise, I can neither read from nor write to the device.
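For reference, the omitted error handling follows the usual kernel pattern (a sketch, not the actual driver code):

res = cdev_add(dev->md_cdev, dev->md_devnum, 1);
if (res < 0)
    return res;

dev->md_sysfsdev = device_create(class, NULL, dev->md_devnum, NULL, "foo%d", index);
if (IS_ERR(dev->md_sysfsdev)) {       // device_create() returns ERR_PTR() on failure
    cdev_del(dev->md_cdev);
    return PTR_ERR(dev->md_sysfsdev);
}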
Why is it so? Shouldn't device_create return an error or at least create /dev/foo1 instead?
I'm using NetMQ (Nuget 3.3.2.2) on .NET 4.5 and I have a single fast generator process with a PUSH socket, and a single slow consumer process using a PULL socket. If I send enough messages to hit the sending HWM, the sending process blocks the thread indefinitely.
Some contrived (generator) code which illustrates the problem:
using (var ctx = NetMQContext.Create())
using (var pushSocket = ctx.CreatePushSocket())
{
    pushSocket.Connect("tcp://127.0.0.1:42404");

    for (int i = 1; i <= 100000; i++)
    {
        var template = GenerateMessageBody(i);
        pushSocket.SendMoreFrame("SampleMessage").SendFrame(Messages.SerializeToByteArray(template));

        if (i % 1000 == 0)
            Console.WriteLine("Sent " + i + " messages");
    }

    Console.WriteLine("All finished");
    Console.ReadKey();
}
On my configuration, this will usually report it has sent about 5000 or 6000 messages, and will then simply block. If I set the send HWM to a large value (or 0), it sends all of the messages as expected.
It looks like it's waiting to receive another command before it tries again, here (in SocketBase.TrySend):
// Oops, we couldn't send the message. Wait for the next
// command, process it and try to send the message again.
// If timeout is reached in the meantime, return EAGAIN.
while (true)
{
    ProcessCommands(timeoutMillis, false);
From what I've read in the 0MQ guide, blocking on a full PUSH socket is the correct behaviour (and is what I want it to do); however, I would expect it to recover once the consumer has cleared its queue.
Short of using some sort of TrySend pattern and dealing with the block myself, is there some option I can set or some other facility I can use to have the PUSH socket attempt to resend blocked messages periodically?
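For what it's worth, the TrySend pattern mentioned above might look like the sketch below. This assumes the TrySendFrame extension methods from later NetMQ (4.x) releases; on 3.3.x the non-blocking send overloads would play the same role:

var timeout = TimeSpan.FromMilliseconds(500);
byte[] body = Messages.SerializeToByteArray(template);

// Keep retrying until the send queue drains below the HWM and accepts the frame.
while (!pushSocket.TrySendFrame(timeout, "SampleMessage", more: true))
{
    Console.WriteLine("Send queue full, retrying...");
    // A cancellation check or a backoff could go here.
}

// Once the first frame of a multipart message is accepted, the
// remaining frames are queued without blocking.
pushSocket.SendFrame(body);

Alternatively, setting pushSocket.Options.SendHighWatermark = 0 before connecting removes the limit entirely, at the cost of unbounded memory growth if the consumer never catches up.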
Hi, I have a problem sending SMS messages in my project. I am using a PIC16F877A and a SIM300; the main function runs repeatedly, and some characters are missing from the sent SMS.
My program is like this:
void main() // main function
{
    Serial_init(); // initialization of serial communication
    Send_SMS();
}

void Serial_init()
{
    TRISC = 0xC0;
    TXSTA = 0x24;
    SPBRG = 129;   // set baud rate to 9600 for a 20 MHz fosc
    RCSTA = 0x90;
    TXIF = 1;
}

void Send_SMS(void)
{
    USART_puts("AT\0");
    putch1(0x0D);                           // ASCII carriage return
    Delay_ms4M(200);
    USART_puts("AT+CMGF=1\0");              // switch into text mode
    putch1(0x0D);
    Delay_ms4M(200);
    USART_puts("AT+CMGS=\"9741153218\"\0"); // send SMS to this number
    putch1(0x0D);
    Delay_ms4M(200);
    USART_puts("Hi this is working LOL\0"); // SMS text
    putch1(0x0A);                           // new line
    Delay_ms4M(200);
    putch1(0x0D);
    Delay_ms4M(100);
    putch1(0x1A);                           // ASCII SUB ('substitute', Ctrl+Z): terminates the message
}

void USART_puts(const unsigned char *string)
{
    while (*string)
        putch1(*string++);
}

void putch1(unsigned char data)
{
    while (TXIF == 0);  // wait for the transmit buffer to empty
    TXREG = data;
}
Please help.
Additional details: all other programs run properly, but if I call the Send_SMS function, main runs repeatedly and several messages are sent with missing characters.
IMHO:
Your chip is resetting. This is the most probable cause: either it is faulty, or you have enabled the watchdog timer somewhere.
For the missing characters:
a) The chip resets in the middle of a data transfer.
b) Rule of thumb for the USART: stop stuffing bytes into the USART. Send each byte with a small leading delay, like 10-20 microseconds. The communication is asynchronous, which means the receiver has to synchronize at the beginning of each communication unit, i.e. each byte. To do that, the receiver spends resources detecting the start bit, its length (in time), and so on. If you send an uninterrupted train of bytes, you can stall the receiver (see the sketch after this list).
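A minimal sketch of that pacing, adapted from the putch1 above (Delay_us is a hypothetical microsecond-delay helper; substitute whatever delay routine the project already provides):

void putch1(unsigned char data)
{
    while (TXIF == 0);  // wait until the transmit register is free
    TXREG = data;
    Delay_us(20);       // brief pause so the receiver can resynchronize
}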
Have you tried the code with another 16F877A, to check for chip failure?
I'm doing a very simple test with queues pointing at the real Azure Storage and, I don't know why, executing the test from my computer is much faster than deploying the worker role to Azure and executing it there. I'm not using the Dev Storage when I test locally; my .cscfg has the connection string to the real storage.
The storage account and the roles are in the same affinity group.
The test consists of a web role and a worker role. The page tells the worker which test to run; the worker runs it and returns the time consumed. This specific test measures how long it takes to get 1000 messages from an Azure queue using batches of 32 messages. First I run the test in debug from VS, then I deploy the app to Azure and run it there.
The results are:
From my computer: 34805.6495 ms.
From Azure role: 7956828.2851 ms.
That would mean it is faster to access queues from outside Azure than from inside, which doesn't make sense.
I'm testing like this:
private TestResult InQueueScopeDo(String test, Guid id, Int64 itemCount)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(_connectionString);
    CloudQueueClient client = account.CreateCloudQueueClient();
    CloudQueue queue = client.GetQueueReference(Guid.NewGuid().ToString());

    try
    {
        queue.Create();
        PreTestExecute(itemCount, queue);

        List<Int64> times = new List<Int64>();
        Stopwatch sw = new Stopwatch();

        for (Int64 i = 0; i < itemCount; i++)
        {
            sw.Start();
            Boolean valid = ItemTest(i, itemCount, queue);
            sw.Stop();
            if (valid)
                times.Add(sw.ElapsedTicks);
            sw.Reset();
        }

        return new TestResult(id, test + " with " + itemCount.ToString() + " elements",
            TimeSpan.FromTicks(times.Min()).TotalMilliseconds,
            TimeSpan.FromTicks(times.Max()).TotalMilliseconds,
            TimeSpan.FromTicks((Int64)Math.Round(times.Average())).TotalMilliseconds);
    }
    finally
    {
        queue.Delete();
    }
}
The PreTestExecute puts the 1000 items on the queue with 2048 bytes each.
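PreTestExecute itself isn't shown; a sketch of what it presumably does (my reconstruction, not the original code):

private void PreTestExecute(Int64 itemCount, CloudQueue queue)
{
    byte[] payload = new byte[2048]; // dummy 2048-byte message body
    for (Int64 i = 0; i < itemCount; i++)
        queue.AddMessage(new CloudQueueMessage(payload));
}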
And this is what happens in the ItemTest method for this test:
Boolean done = false;

public override bool ItemTest(long itemCurrent, long itemCount, CloudQueue queue)
{
    if (done)
        return false;

    CloudQueueMessage[] messages = null;
    while ((messages = queue.GetMessages((Int32)itemCount).ToArray()).Any())
    {
        foreach (var m in messages)
            queue.DeleteMessage(m);
    }

    done = true;
    return true;
}
I don't know what I'm doing wrong: same code, same connection string, and I get these results.
Any ideas?
UPDATE:
The problem seems to be in the way I calculate it.
I have replaced times.Add(sw.ElapsedTicks); with times.Add(sw.ElapsedMilliseconds); and this block:
return new TestResult(id, test + " with " + itemCount.ToString() + " elements",
    TimeSpan.FromTicks(times.Min()).TotalMilliseconds,
    TimeSpan.FromTicks(times.Max()).TotalMilliseconds,
    TimeSpan.FromTicks((Int64)Math.Round(times.Average())).TotalMilliseconds);
for this one:
return new TestResult(id, test + " with " + itemCount.ToString() + " elements",
    times.Min(), times.Max(), times.Average());
And now the results are similar, so apparently there is a difference in how the precision is handled or something. I will research this later on.
The problem apparently was an issue with the different nature of Stopwatch and TimeSpan ticks, as discussed here:
Stopwatch.ElapsedTicks Property
Stopwatch ticks are different from DateTime.Ticks. Each tick in the DateTime.Ticks value represents one 100-nanosecond interval. Each tick in the ElapsedTicks value represents the time interval equal to 1 second divided by the Frequency.
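In other words, converting raw Stopwatch ticks with TimeSpan.FromTicks misinterprets them unless Stopwatch.Frequency happens to be exactly 10 MHz. A small sketch of the difference:

var sw = System.Diagnostics.Stopwatch.StartNew();
// ... timed work ...
sw.Stop();

// Wrong in general: treats Stopwatch ticks as 100 ns TimeSpan ticks.
double wrongMs = TimeSpan.FromTicks(sw.ElapsedTicks).TotalMilliseconds;

// Correct: scale raw ticks by the timer frequency (ticks per second)...
double rightMs = sw.ElapsedTicks * 1000.0 / System.Diagnostics.Stopwatch.Frequency;

// ...or simply use the properties that already do the conversion.
double alsoRightMs = sw.Elapsed.TotalMilliseconds;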
How is your CPU utilization? Is it possible that your code is spiking the CPU, and that your workstation is simply much faster than your Azure node?