My code looks like this:
for ($i = 0; $i < 100; $i++) {
    $objUser = [
        "UserName"     => $request["UserName"] . $i,
        "EmailAddress" => $request["EmailAddress"] . $i,
        "RoleID"       => RoleEnum::ProjectManager,
        "Password"     => $request["Password"],
    ];
    $RegisterResponse = $this->Register->Register($objUser);
    $Data = $RegisterResponse["Data"];
    $job = new AccountActivationJob($Data);
    dispatch($job);
}
The code above creates 100 users, and each iteration dispatches a queued job to send an email notification. I am using the default database queue.
I have a shared hosting account on GoDaddy. For some reason the CPU usage reaches 100%, and the loop eventually stops partway through, roughly 5 minutes in.
My problem is that it never finishes creating the 100 users. I am doing this to test a sample queue implementation in which multiple users request registration. Am I doing anything wrong?
As stated above, GoDaddy imposes heavy resource limitations. From what I have heard, you can only send 100 emails an hour, and not all at once: if it detects that you are sending a lot of emails, your process is blocked.
Instead, you can queue the messages so they are sent one every 20 or 30 seconds. That keeps resource usage within the limits, and the emails still reach your customers without any problem.
You can use the sleep function for this.
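A minimal sketch of that spacing using Laravel's delayed dispatch rather than a blocking sleep() call, assuming AccountActivationJob uses the standard Queueable trait (the 20-second spacing is only an example):

for ($i = 0; $i < 100; $i++) {
    // ... register the user exactly as in the question ...
    $job = new AccountActivationJob($Data);

    // Push the job now, but tell the queue worker not to process it until
    // its slot comes up: user 0 at +0s, user 1 at +20s, user 2 at +40s, ...
    $job->delay(now()->addSeconds($i * 20));

    dispatch($job);
}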
GoDaddy does limit the resources you can use; if you go over the limit, it will kill your processes over SSH.
The limits are available here.
Try running the PHP process with a different nice value.
That's what I do when I need to run an artisan command that uses a lot of resources.
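A minimal sketch of the same idea from inside PHP, assuming proc_nice() is available (it is often disabled on shared hosts; on the command line, prefixing the call with nice achieves the same effect):

<?php
// Raise the niceness of the current PHP process by 19 (the maximum on
// Linux) so it yields CPU time to everything else on the machine.
if (function_exists('proc_nice')) {
    proc_nice(19);
}

// ... run the resource-heavy loop / artisan logic here ...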
After looking into it, I found that I should move to a VPS instead of shared hosting. Here are some reasonably cheap plans from GoDaddy: https://in.godaddy.com/hosting/vps-hosting
My use case is the following:
Once every day I upload 1,000 single-page PDFs to Azure Storage and process them with Form Recognizer via the latest azure-form-recognizer Python client.
So far I'm using the async version of the client and I send the 1,000 coroutines concurrently.
tasks = {asyncio.create_task(analyze_async(doc)): doc for doc in documents}
pending = set(tasks)

# Handle retry
while pending:
    # backoff in case of 429
    await asyncio.sleep(1)
    # wait for all currently pending calls to complete
    finished, pending = await asyncio.wait(
        pending, return_when=asyncio.ALL_COMPLETED
    )
    # check if a task raised an exception and register it for a new run
    for task in finished:
        doc = tasks[task]
        if task.exception():
            new_task = asyncio.create_task(analyze_async(doc))
            tasks[new_task] = doc
            pending.add(new_task)
Now I'm not really comfortable with this setup, mainly because of the unpredictable successive states of the service within the same iteration: it can be up, then throw 429, then be up again, which is not deterministic enough for me. I was wondering whether another approach is possible. Do you think I should instead increase the transaction rate progressively, starting with 15 (the default TPS), then 50 … 100, until the queue is empty? Or is there another option?
Thx
You need to enable CORS on the storage account and configure it so that it can serve the heavier workload. Follow this procedure to run the heavy workload through Form Recognizer:
Create a storage account to upload the files. Use page blobs for better performance, and pick ZRS redundancy.
Go to CORS and add the required URL: set the Allowed origins to https://formrecognizer.appliedai.azure.com
Go to Containers and upload the documents. Use the container and blob information as the input for the recognizer.
If you go through Form Recognizer Studio, the total size of the documents and the number of characters are limited, so it is better to use the Python code with the created container as the input folder.
The following script checks a site's content every 10 seconds to see whether anything has changed. It's for a very time-sensitive application: if something on the site changes, I only have seconds to do something else. The script then starts a new download-and-compare cycle and waits for the next change. The "do something else" part has yet to be scripted and is not relevant to the question.
The question: will it be a problem for a public website to have a script downloading a single page every 10-15 seconds? If so, is there any other way to monitor a site unattended?
#!/bin/bash
Domain="example.com"
Ocontent=$(curl -L "$Domain")
Ncontent="$Ocontent"

until [ "$Ocontent" != "$Ncontent" ]; do
  Ocontent=$(curl -L "$Domain")
  #CONTENT CHANGED TRUE
  #if [ "$Ocontent" == "$Ncontent" ]; then
  #  Ocontent=$(curl -L "$Domain")
  #fi
  echo "$Ocontent"
  sleep 10
done
The problems you're going to run into:
If the site notices and has a problem with it, you may end up on a banned IP list. Using an IP pool or other distributed resource can mitigate this.
Pinging a website precisely every x number of seconds is unlikely. Network latency is likely to cause a great deal of variance in this.
If you get a network partition, your code should know how to cope. (What if your connection goes down? What should happen?)
Note that the initial HTTP response is only part of downloading a webpage. There may be changes to referenced files such as CSS, JavaScript, or images that are not immediately apparent from the original response alone.
I'm getting this message "Parallelize downloads across hostnames" when checking my WordPress site on GTmetrix > https://gtmetrix.com
Here are the details > https://gtmetrix.com/parallelize-downloads-across-hostnames.html
How do I fix that?
Details
Web browsers put a limit on the number of concurrent connections they will make to a host. When there are many resources to download, a backlog of waiting resources forms: the browser makes as many simultaneous connections to the server as it allows, then queues the rest until earlier requests finish.
The time spent waiting for a free connection is referred to as blocking, and reducing this blocking time can result in a faster-loading page. GTmetrix's waterfall diagram for a page that loads 45 resources from the same host shows how long resources are blocked (the brown segments) before they are downloaded (the purple segments) while they wait for a free connection.
So here is a hack to implement it on WordPress.
In order to work properly, all subdomains/hostnames MUST have the same structure/path. Ex:
example.com/wp-content/uploads/2015/11/myimage.jpg
media1.example.com/wp-content/uploads/2015/11/myimage.jpg
media2.example.com/wp-content/uploads/2015/11/myimage.jpg
Add to functions.php
function parallelize_hostnames($url, $id) {
    $hostname = par_get_hostname($url);
    $url = str_replace(parse_url(get_bloginfo('url'), PHP_URL_HOST), $hostname, $url);
    return $url;
}

function par_get_hostname($name) {
    //add your subdomains below, as many as you want.
    $subdomains = array('media1.mydomain.com', 'media2.mydomain.com');
    $host = abs(crc32(basename($name)) % count($subdomains));
    $hostname = $subdomains[$host];
    return $hostname;
}

add_filter('wp_get_attachment_url', 'parallelize_hostnames', 10, 2);
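Not from the original answer, but a quick way to sanity-check the filter once it is in functions.php (the attachment ID 42 is just a placeholder; use any existing attachment ID):

// The same filename always hashes to the same subdomain, so each image is
// consistently served from one host and browser caching still works.
echo wp_get_attachment_url(42);
// e.g. http://media1.mydomain.com/wp-content/uploads/2015/11/myimage.jpg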
This is mainly due to HTTP/1.1, under which browsers open on average 6 connections per hostname.
If you are running over HTTPS with a provider that supports HTTP/2, this warning can usually be safely ignored now: with HTTP/2, multiple resources can be loaded in parallel over a single connection.
--
However, if you do need to fix it, you can follow the steps below:
Create additional subdomains such as:
domain.com static1.domain.com static2.domain.com
Simply add the following code to your WordPress theme's functions.php file, replacing the $subdomains values with your own subdomains.
All subdomains/hostnames MUST have the same structure/path.
function parallelize_hostnames($url, $id) {
    $hostname = par_get_hostname($url); //call supplemental function
    $url = str_replace(parse_url(get_bloginfo('url'), PHP_URL_HOST), $hostname, $url);
    return $url;
}

function par_get_hostname($name) {
    $subdomains = array('static1.domain.com', 'static2.domain.com');
    $host = abs(crc32(basename($name)) % count($subdomains));
    $hostname = $subdomains[$host];
    return $hostname;
}

add_filter('wp_get_attachment_url', 'parallelize_hostnames', 10, 2);
Read more about the parallelize downloads across hostnames warning and why you probably don't need to worry about this anymore.
I am using the Translate API to translate some texts on my page. These texts are large HTML-formatted texts, so I had to write a function that splits them into pieces of fewer than 4,500 characters (including HTML tags) to stay under the limit of 5,000 characters per request (one way to do the splitting is sketched after this question). I also had to modify the Google PHP API to allow sending requests via POST.
I have enabled the paid version of the API in the Google Developers Console and changed the total quota to 50M characters per day and 500 requests/second/user.
Now I am translating the whole database of texts with a script. It works fine, but at some random points I receive the error "(403) User Rate Limit Exceeded", and I have to wait a few minutes to re-run the script, because once the error is hit the API keeps returning it until some time has passed.
I don't know why it keeps returning the error when I am not exceeding the number of requests; it's as if there were some kind of maximum number of characters per interval of time or something...
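Not the actual splitting function from the question, just a minimal sketch of one way to accumulate text into pieces under the 4,500-character budget. It splits on blank lines and assumes no single paragraph exceeds the limit; real HTML needs extra care so that tags are not cut in half:

<?php
// Hypothetical helper: split plain text into chunks of at most $limit
// characters, breaking only on blank-line paragraph boundaries.
function split_for_translation(string $text, int $limit = 4500): array
{
    $chunks  = [];
    $current = '';
    foreach (preg_split("/\n\n+/", $text) as $paragraph) {
        // Start a new chunk if adding this paragraph would exceed the budget.
        if ($current !== '' && strlen($current) + strlen($paragraph) + 2 > $limit) {
            $chunks[] = $current;
            $current  = '';
        }
        $current .= ($current === '' ? '' : "\n\n") . $paragraph;
    }
    if ($current !== '') {
        $chunks[] = $current;
    }
    return $chunks;
}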
You are probably exceeding the quota limits you set earlier: either the daily billable limit or the per-second character limit on requests.
To change the usage limits or request an increase to your quota, do the following:
1. Go to the Google Developers Console "https://console.developers.google.com/".
2. Select a project.
3. On the left sidebar, expand APIs & auth.
4. Click APIs.
5. Click the name of an activated API you're interested in (e.g. the Translate API).
6. Near the top of the info page for the API, click Quota.
If you have the billing enabled, just click Quota and it will take you to the quota page where you can view and change the quota-related settings.
If not, clicking Quota shows information about any free quota and limits that apply to the Translate API.
Google Developer Console has a rate limit of 10 requests per second, regardless of the settings or limits you may have changed.
You may be exceeding this limit.
I was unable to find any documentation around this, but could verify it myself with various API requests.
You control the character limit but not the concurrency.
You are either making more than 500 concurrent requests per second, or another Google API you are using is hitting that concurrency limit.
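Not from the original answer, but a minimal sketch of one way to cope: back off and retry when the rate-limit error comes back. It assumes the google/cloud-translate PHP client; $translate, $chunks, the 'en' target and the retry limits are placeholders:

use Google\Cloud\Core\Exception\ServiceException;

// Retry each chunk with exponential backoff when the API reports
// "User Rate Limit Exceeded" (HTTP 403) or "Too Many Requests" (429).
foreach ($chunks as $chunk) {
    $attempt = 0;
    do {
        try {
            $result = $translate->translate($chunk, ['target' => 'en']);
            break; // success, move on to the next chunk
        } catch (ServiceException $e) {
            if (!in_array($e->getCode(), [403, 429]) || ++$attempt > 5) {
                throw $e; // not a rate-limit error, or out of retries
            }
            sleep(2 ** $attempt); // 2s, 4s, 8s, ... before retrying
        }
    } while (true);
}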
The referer header is not set by default, but it is possible to add the headers to a request like so:
$result = $t->translate('Hola Mundo', [
    'restOptions' => [
        'headers' => [
            'referer' => 'https://your-uri.com'
        ]
    ]
]);
If it makes more sense for you to set the referer at the client level (so all requests flowing through the client receive the header), this is possible as well:
$client = new TranslateClient([
    'key' => 'my-api-key',
    'restOptions' => [
        'headers' => [
            'referer' => 'https://your-uri.com'
        ]
    ]
]);
This worked for me!
In my case, this error was caused by invalid payment information. Go to the Billing area and make sure everything is OK.
I use the dot feature (m.yemail@gmail.com instead of myemail@gmail.com) to give out addresses to questionable sites, so that I can easily spot spam when my address has been sold.
I made this function and set it to trigger every 30 minutes to filter these automatically.
function moveSpamByAddress() {
  var addresses = ["m.yemail@gmail.com"];
  var threads = GmailApp.getInboxThreads();
  for (var i = 0; i < threads.length; i++) {
    var messages = threads[i].getMessages();
    for (var ii = 0; ii < messages.length; ii++) {
      for (var iii = 0; iii < addresses.length; iii++) {
        if (messages[ii].getTo().indexOf(addresses[iii]) > -1) {
          threads[i].moveToSpam();
        }
      }
    }
  }
}
This works, but I noticed that it runs more slowly than I would expect (though my expectation may be unreasonable), given that my inbox only contains 50 messages and I am currently filtering only one address. Is there a way to increase execution speed?
Also, are there any penalties for running scripts too often? I see that I can trigger a script every minute, which would increase the likelihood of filtering a message before I see it, but it would also run the script uselessly far more often.
You can do this using native Gmail filters plus Apps Script.
Script time quotas vary from 1 to 6 hours depending on the account type.
To improve performance, first check GmailApp.getInboxUnreadCount() and return immediately if it is zero.
If you use a 1-minute trigger, make sure to use a lock so that one run does not start while another is still going; if the lock is taken, simply return.
First, make a Gmail filter so that when "to" matches your special address, a special label like "mySpam" is applied.
Second, make an Apps Script with the suggestions above. Your code then no longer needs to search so much: you just fetch the threads with that label (a single API call) and call moveToSpam(). There shouldn't be many threads under that label at any given time if the script runs often.