Asana batch task creation - Heroku

I am getting HTTP 500 errors on some random requests when 8 threads create tasks.
The error messages are funny:
25 tan reindeer cheer equally
10 sad sharks rush well
Please let me know the best way to do fast (parallel) task creation.

Can you give us examples of tasks where it failed? It could be related to the follower/assignee, especially when set by email.
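In case it helps while that is investigated, here is a minimal Kotlin sketch of the pattern that usually tames this kind of failure: a smaller fixed thread pool plus retry with exponential backoff on 5xx/429 responses. It assumes a personal access token in an ASANA_TOKEN environment variable and a placeholder project GID; the endpoint is the standard POST /api/1.0/tasks from the Asana REST API.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

val token: String = System.getenv("ASANA_TOKEN") // assumed personal access token
val client: HttpClient = HttpClient.newHttpClient()

fun createTask(name: String, projectGid: String) {
    // Sketch only: real code should JSON-escape the name.
    val body = """{"data": {"name": "$name", "projects": ["$projectGid"]}}"""
    val request = HttpRequest.newBuilder(URI.create("https://app.asana.com/api/1.0/tasks"))
        .header("Authorization", "Bearer $token")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    var backoffMs = 500L
    repeat(5) {
        val response = client.send(request, HttpResponse.BodyHandlers.ofString())
        // Retry only transient failures (5xx and 429); anything else is done.
        if (response.statusCode() < 500 && response.statusCode() != 429) return
        Thread.sleep(backoffMs)
        backoffMs *= 2
    }
    error("Task \"$name\" still failing after retries")
}

fun main() {
    val pool = Executors.newFixedThreadPool(4) // fewer threads than your 8
    (1..100).forEach { i ->
        pool.submit { createTask("Task #$i", "1200000000000000") } // placeholder project GID
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.MINUTES)
}

Since the question title mentions batch creation: Asana also documents a batch endpoint that bundles several actions into one request, which cuts down the number of concurrent calls.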

Related

Scheduling a task periodically in an Android app using WorkManager (CoroutineWorker)

I'm trying to run a task periodically every 12 hours. I've used WorkManager and followed their documentation. The task runs periodically just fine as long as the user is actually using the app.
But if I close the app, or even just let my phone sit idle for a while, the task seems to stop running.
I searched for this problem on Google and came across posts such as this post, which I think explains the behaviour on my phone.
From my understanding, this problem is not related to WorkManager specifically, but to all frameworks that try to run background tasks?
Is there a way to still run tasks periodically on most devices, or is WorkManager still the way to go?
Thanks!
Please check this:
https://developer.android.com/topic/libraries/architecture/workmanager/how-to/debugging#use-adb-shell-dumpsys-jobscheduler
Required constraints: TIMING_DELAY CONNECTIVITY [0x90000000]
Satisfied constraints: DEVICE_NOT_DOZING BACKGROUND_NOT_RESTRICTED WITHIN_QUOTA [0x3400000]
Unsatisfied constraints: TIMING_DELAY CONNECTIVITY [0x90000000]
Minimum latency: +1h29m59s687ms
Run time: earliest=+38m29s834ms, latest=none, original latest=none
Periodic work is not exactly periodic. You have different constraints:
Explicit - the ones you set yourself.
Implicit - set by the system, related to battery optimization: the system wants to save battery and also limit network usage.
When you have "periodic work" you actually have an explicit constraint called TIMING_DELAY. When the period has passed, it does not mean the work will start. It means that this constraint is satisfied, and only when all the other constraints are satisfied as well will the work start.
So if, for example, your work has a "period" of 12 hours but you wait an extra 4 hours for the other constraints, you effectively get a period of 16 hours.
And after the work is finished, WorkManager creates a completely new job in the JobScheduler with TIMING_DELAY set to 12 hours again. It does not account for the extra 4 hours. So you can't reason like:
I have 5 days, so that means 10 executions. It might be only 4 or 5 executions.
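To make the constraints in the dumpsys output above concrete, here is a minimal Kotlin sketch of a 12-hour periodic request. SyncWorker is a hypothetical CoroutineWorker; the CONNECTED network requirement is what shows up as CONNECTIVITY, and the period itself becomes TIMING_DELAY:

import android.content.Context
import androidx.work.Constraints
import androidx.work.CoroutineWorker
import androidx.work.ExistingPeriodicWorkPolicy
import androidx.work.NetworkType
import androidx.work.PeriodicWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.WorkerParameters
import java.util.concurrent.TimeUnit

// Hypothetical worker - put your actual 12-hourly task in doWork().
class SyncWorker(ctx: Context, params: WorkerParameters) : CoroutineWorker(ctx, params) {
    override suspend fun doWork(): Result = Result.success()
}

fun schedule(context: Context) {
    // Explicit constraint: shows up as CONNECTIVITY in the dumpsys output.
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.CONNECTED)
        .build()
    // The 12-hour period is what becomes the TIMING_DELAY constraint.
    val request = PeriodicWorkRequestBuilder<SyncWorker>(12, TimeUnit.HOURS)
        .setConstraints(constraints)
        .build()
    // KEEP means re-enqueueing on every app start does not reset the timer.
    WorkManager.getInstance(context).enqueueUniquePeriodicWork(
        "sync", ExistingPeriodicWorkPolicy.KEEP, request
    )
}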
You can improve this by asking the user to exempt you from battery optimization:
https://developer.android.com/training/monitoring-device-state/doze-standby#support_for_other_use_cases
If you need to be really exact, you need to use AlarmManager. But the whole idea behind all of this is saving battery, so it is not only about what the devs need, but also about what the user needs.
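For completeness, a minimal sketch of that exemption request. It needs the REQUEST_IGNORE_BATTERY_OPTIMIZATIONS permission in the manifest, and Google Play restricts which app categories may request it:

import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.PowerManager
import android.provider.Settings

// Needs <uses-permission android:name="android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS"/>
// in the manifest; only ask if the app is not already exempt.
fun requestBatteryExemption(context: Context) {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    if (!pm.isIgnoringBatteryOptimizations(context.packageName)) {
        val intent = Intent(
            Settings.ACTION_REQUEST_IGNORE_BATTERY_OPTIMIZATIONS,
            Uri.parse("package:${context.packageName}")
        ).addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(intent)
    }
}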

Why are there holes in my CloudWatch logs?

I have been running Lambdas written in C# with the serverless.com framework for some months now, and I consistently notice holes in the CloudWatch logs. So far it has only been an annoyance, and I have been looking around for an explanation, but it is getting to the point where I need to understand/fix the problem.
For instance, today the Lambda monitor shows hundreds to thousands of executions between 7 AM and 8 AM, but CloudWatch shows log files up until 7:19 AM and then nothing again until 8:52 AM.
What is going on here?
Logs are written per invocation of the Lambda, but log streams correspond to concurrent executions. If you look at your Lambda metrics, you will see a stat called ConcurrentExecutions - the total number of simultaneous Lambda containers you have running at any given moment - but that is NOT the same as Invocations. The headless project I'm on does about 5k invocations an hour, and we've never been above 5 concurrent executions across any of our 25-ish Lambdas (it helps that, once warm, they each run in about 300 ms).
So if you have 100 invocations in 10 seconds, but they all take less than a second to run, then once a given Lambda container is spun up it will be reused as long as it keeps receiving events. This is how AWS works around the 'cold start' problem as much as possible, where a given Lambda may take 10-15 or more seconds to start up. By trying to predict traffic flow (and you can manipulate these settings as well), AWS attempts to have a warm Lambda ready to go whenever you need it.
These concurrent executions are slowly shut down as volume drops off, their traffic folded back into the containers that are still active.
What this means for log group logs is twofold:
you may see large 'gaps' in the times, but if you look closely, any given log stream will have multiple invocations in it.
logs are delayed by several seconds to several minutes depending on the load, so at any given time you may not actually be seeing all the logs for a given moment.
The other possibility is that your logging is not set up correctly (Python Lambdas in particular have difficulty logging properly to CloudWatch - the default logging handler doesn't play nicely with the way Lambda boots up a handler and attaches it to the log group), or that you are getting a ton of hits that don't actually do anything - only pings/keep-alive events that never reach any of your log statements - in which case you will generally only see the container startup/shutdown log statements (which, as stated above, are far fewer).
What do you mean by gaps in log groups?
A log group is made up of log streams, and invocations served by the same Lambda container write to the same log stream. So the most recent log stream in your log group is not necessarily the one with the latest log entry.
Here you can read more about it:
https://dashbird.io/blog/how-to-save-hundreds-hours-debugging-lambda/
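One practical consequence: don't browse individual streams by their timestamps; filter the whole log group by time window instead. A minimal Kotlin sketch using the AWS SDK for Java v2 (the log group name is a placeholder):

import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient
import software.amazon.awssdk.services.cloudwatchlogs.model.FilterLogEventsRequest
import java.time.Instant
import java.time.temporal.ChronoUnit

fun main() {
    val logs = CloudWatchLogsClient.create()
    val now = Instant.now()
    val request = FilterLogEventsRequest.builder()
        .logGroupName("/aws/lambda/my-function") // placeholder log group name
        .startTime(now.minus(2, ChronoUnit.HOURS).toEpochMilli())
        .endTime(now.toEpochMilli())
        .build()
    // The paginator searches every stream in the group, so events that
    // landed in an older, still-open stream show up too.
    logs.filterLogEventsPaginator(request).events().forEach { e ->
        println("${Instant.ofEpochMilli(e.timestamp())} [${e.logStreamName()}] ${e.message()}")
    }
}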
While trying to edit my question with screenshots and tallies of the data, I came upon the answer. I thought it would be helpful for this to be a separate answer as it is extremely specific and enlightening.
The crux of the problem is that I didn't expect such huge gaps between invocation times and log write times. 12 minutes is an eternity compared to the work I have done in the past.
Consider this graph:
12:59 UTC should be 7:59 AM CDT. Counting the invocations between 12:59 and 13:08, I get roughly 110.
CloudWatch shows these log streams:
Looking at these log streams, there seems to be a large gap. But the timestamp on a log stream is the "file close" time: the log stream stamped 8:08:37 includes events from 12 minutes before.
So the timestamps on the log streams are not very useful for finding debug data. "Search all" has not been very helpful up to now either: slow and very limited. I will look into some other method for crunching logs.

JMeter - How to execute a thread group every 10 minutes (what is the best practice?)

I have a situation where an API is called by 500 users/threads every 10 minutes.
I have created a JMeter script for this. It takes around 4 to 5 minutes to get responses for all 500 threads.
I have created a batch file to execute this .jmx file, and the batch file is then called every 10 minutes using Task Scheduler in Windows.
I am not sure whether this is the best approach.
I have read about the Test Action sampler, timers, think time, etc.
Can someone please advise what is recommended in my case?
My requirement is to trigger the thread group every 10 minutes, irrespective of how long the previous run took.
According to Linus Torvalds
If it compiles, it is good; if it boots up, it is perfect
Given your approach works fine for your use case you should be good to go.
Personally, I would be interested in the test results as well (not sure how you're handling them). A better idea might be to put your script under the orchestration of a Continuous Integration server like Jenkins: it provides flexible options for triggering jobs (including scheduling), and you will be able to get statistics and trends, and conditionally mark your tests as passed or failed based on response time using the Performance Plugin.
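If you'd rather not depend on Windows Task Scheduler, the same fixed-rate trigger can live in a small JVM runner. A minimal sketch (file names are placeholders; scheduleAtFixedRate will not overlap runs, which only matters if a run ever exceeds the 10-minute interval - yours finish in 4-5 minutes):

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    // Fires every 10 minutes from the first trigger, not 10 minutes after
    // each run finishes - equivalent here, since a run takes only 4-5 minutes.
    scheduler.scheduleAtFixedRate({
        ProcessBuilder(
            "jmeter", "-n", "-t", "api_test.jmx", // assumes jmeter(.bat) is on PATH
            "-l", "results_${System.currentTimeMillis()}.jtl"
        ).inheritIO().start().waitFor()
    }, 0, 10, TimeUnit.MINUTES)
}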

Scheduling a Laravel command

So a while back I built an app for a company and now they'd like it revised a bit. The basis of the app is an RSVP system for health care seminars and consultations. They would now like to send email and text reminders once daily to all clients whose seminar/consultation is within 24 hours. I've created the code to pull all the seminars/consultations that occur within 24 hours. I know I will need to set up a cron job on their server, which is not a big deal, but how would you go about the scheduling? Am I better off creating 2 new commands (one for seminars, one for consultations) and using $schedule->command('some:newCommand --force')->dailyAt('2:00'); in the Kernel.php file, or would you just use $schedule->command('queue:work --force')->dailyAt('2:00');? I'm just trying to understand the best/most efficient practice.
Just my 2 cents...
use $schedule->job(...) instead
put queue:work in a supervised process
The $schedule->job will figure out all the messages that need sending and dispatch one job per message transmission. So: a job that makes more jobs.
Use the failed_jobs table to track if anyone didn't get the message.

Difference between JMeter load test scenarios

I am load testing an ASP.NET website using JMeter, with the two scenarios below. Scenario 1 gives me the result I expect (which could itself be wrong), while Scenario 2 does not give the same result, even though I have used the same number of requests within the same time. Can someone explain why?
Scenario 1.
Scenario 2.
Ramp-up time does not determine when any of your tests are going to complete. It only controls how quickly your threads start.
Also, the number of threads any test can create concurrently is limited by the memory you've allocated to JMeter. Even though you've set the thread count to 60000, if you've hit the maximum memory you've allocated, the threads will either queue up or never be created (you can watch the JMeter logs for thread creation or errors).
I recommend tuning your JMeter instance so your tests have some stability; here's a good guide. LINK
The number of requests you have sent might be the same, but the concurrent user load on the server is completely different.
I clarified a similar question a few weeks ago. You can check the answer here.
