Apps Script My Executions Entries Delayed/Missing - debugging

Recently, I've been having issues with My Executions being slow to add execution entries.
The Apps Script on the Google Sheet runs successfully (based on its output), but there is either no entry in the execution logs or the entry is delayed (10+ minutes).
More worrisome, today I received an automatically generated email with a script error:
"We're sorry, a server error occurred. Please wait a bit and try again."
But there was no corresponding entry in executions.
I've tried View > Stackdriver Logging and View > Stackdriver Error Reporting, which both go to the Apps Script dashboard, and I've played with the filters, but the errors do not appear.
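For what it's worth, one way to make failures visible immediately, independent of the lagging Executions list, is to add explicit error handling inside the script itself. A minimal sketch, not from the original script; "runReport" is a made-up name standing in for whatever function the trigger actually calls:

```javascript
// Hypothetical wrapper -- the existing sheet logic goes inside the try block.
function runReport() {
  try {
    // ... existing spreadsheet logic ...
  } catch (e) {
    // console.error entries go to Cloud Logging (the Stackdriver views
    // mentioned above); the email gives an immediate record of the failure.
    console.error('runReport failed: ' + e.stack);
    MailApp.sendEmail(
      Session.getEffectiveUser().getEmail(),
      'Apps Script failure: runReport',
      e.stack
    );
    throw e; // rethrow so the execution is still recorded as failed
  }
}
```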

Related

Retry a transaction on Candy Machine

I am just finishing an upload of 8000 assets to Candy Machine (via the upload command). Everything seemed to be working well while it was creating the bundles and saving them to the cache, but once it started to write the indices I began seeing these two errors on and off:
Waiting 5 seconds to check Bundlr balance.
Requesting a withdrawal of 0.638239951 SOL from Bundlr...
Successfully withdrew 0.638244951 SOL.
Writing all indices in 719 transactions...
Progress: [█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░] 2% | 18/719
Transaction simulation failed: Blockhash not found
Failed writing indices 3682-3691: Transaction was not confirmed in 60.01 seconds. It is unknown if it succeeded or failed.
I have been searching the internet, and from what I can tell these errors are out of my control... is this correct? Or what can I do to get these indices to write successfully? It's at 50% progress right now, but I assume the upload is not going to be successful when it finishes. If that is the case, do I need to run the candy machine upload command all over again, or is there a way for me to just re-run the transaction portion (where it started to fail)? I've seen some notes on retry, but it wasn't completely clear to me.
The upload process took about 2.5 hours, so I would like to avoid repeating that if at all possible.
Help is very much appreciated.
Both errors are common, so you don't have to worry about them. You should use a custom RPC (passing --rpc-url to the upload command) and wait until the upload command ends. When it ends, run the verify_upload command to see if everything went well (if verify_upload shows an error, run upload again and repeat until verify_upload shows the "ready to deploy" message).
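To give a sense of why simply re-running is the right move: both errors come from the underlying @solana/web3.js layer, where a transaction built against an expired blockhash just needs to be rebuilt and resent. The sketch below is illustrative only, it is not the candy machine CLI's actual code, and the function and parameter names are made up for the example:

```javascript
// Illustrative retry pattern with @solana/web3.js (not the CLI's real code).
const { Connection, sendAndConfirmTransaction } = require('@solana/web3.js');

async function sendWithRetry(rpcUrl, buildTransaction, signers, maxAttempts = 5) {
  const connection = new Connection(rpcUrl, 'confirmed');
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // A fresh blockhash each attempt -- "Blockhash not found" means the
      // previous one expired before the transaction landed.
      const { blockhash } = await connection.getLatestBlockhash();
      const tx = buildTransaction();          // caller supplies the instructions
      tx.recentBlockhash = blockhash;
      tx.feePayer = signers[0].publicKey;     // first signer pays the fee
      return await sendAndConfirmTransaction(connection, tx, signers);
    } catch (err) {
      console.warn(`Attempt ${attempt} failed: ${err.message}`);
      if (attempt === maxAttempts) throw err;
    }
  }
}
```

This is why the advice above is simply to let upload finish and re-run it until verify_upload is happy.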

cypress generates random error during cy.wait(...) only when running on gitlab-runner (VM)

I am running some Cypress (version 9.5.2) tests on a project, and when I execute them on my computer there is no problem; everything is fine.
But when I run them on my GitLab runner with Docker as the executor, I get some errors that I don't understand.
For example, in the middle of a cy.wait(30000); I get this error:
CypressError: Timed out after waiting 60000ms for your remote page
to load. Your page did not fire its load event within 60000ms. You
can try increasing the pageLoadTimeout value in cypress.json to
wait longer. Browsers will not fire the load event until all
stylesheets and scripts are done downloading.
So I don't understand what this error is related to. Do you have any idea? I can share a video if that would help.
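One hedged suggestion, following the hint in the error message itself: raise pageLoadTimeout for the slower CI environment, either globally in cypress.json (e.g. "pageLoadTimeout": 120000) or per visit in the spec. The URL and values below are placeholders, not taken from the original project:

```javascript
// Sketch of a per-visit timeout override (Cypress 9 syntax).
describe('slow CI environment', () => {
  it('gives the page longer to fire its load event', () => {
    // https://app.example.test is a placeholder for the app under test.
    cy.visit('https://app.example.test', { timeout: 120000 }); // ms to wait for the load event
    cy.wait(30000); // the wait from the original report, unchanged
  });
});
```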

LaunchServices logs XPC_ERROR_CONNECTION_INTERRUPTED in mac os x console

1. Service only ran for 0 seconds. Pushing respawn out by 10 seconds
2. LaunchServices: received XPC_ERROR_CONNECTION_INTERRUPTED trying to map database database
launchservices: database mapping failed with result -10822, retrying
I found these two logs related to my application in the Console; they are generated every 10 seconds.
I searched about it but didn't find a proper explanation:
https://discussions.apple.com/thread/7263229?tstart=0
https://forums.developer.apple.com/thread/16788
Any idea about these logs? Any help would be useful.
For the first warning, please check all launch service (launchd) commands and see which command causes these logs. Please provide more logs related to the first warning.
(Restarting your machine may stop this warning from being logged.)

Batch file called by scheduled task throws error when scheduled, runs fine when double clicked

I have a batch file that maps a networked drive. About a week or so ago the password expired, so the program calling the batch file started throwing errors.
I've updated the password in the batch file, and when I double click on the batch file, the drive maps fine. However, when the scheduled task kicks off, I get the following error:
Logon failure: unknown user name or bad password.
Anyone seen this before? I've tried recreating the scheduled task, but it doesn't seem to make any difference.
EDIT
I've updated the properties of the scheduled task, which isn't the problem. The problem seems to be the username and password in the batch file. The strange thing is if I log on interactively and double click the executable, everything works perfectly.
The last time the job ran it threw a "semaphore timeout period has expired" error. I've never seen this particular error before, but it seems like it was actually logged on and trying to copy files when this happened.
EDIT
I've revised my code to make it as simple as possible. I'm using a batch file to map the drive, then using code to transfer the files. I still run into the same issue - it works fine when I double click the batch file, but once I throw Scheduler into the picture, it throws a "Bad username or invalid password" error.
Occasionally when I'm trying to run the file by double clicking on it, I get a "Could not find part of the path" error. This says to me the drive mapping actually worked but something failed when it was trying to copy. (Most of the time, testing by double clicking works fine)
The username and password associated with the task when you created it are no longer valid or have changed.
This generally occurs when you create the task and forget to select the option not to store the password. If your password is set to expire, you will run into this problem every time you have to reset it. Make sure the option is set as shown in the image.
It sounds like the username and/or password associated with the scheduled task is no longer correct. The batch file is likely fine; you just need to change the properties of the scheduled task.
I ran into something similar while testing a new PowerShell script we wrote to create a scheduled task that backs up to one or more network locations. I had to go through a number of iterations, and when I decreased from two network locations to one, the scheduled task stopped working: individual steps in the called script gave "Logon failure: unknown user name or bad password" errors, though when I copied the arguments and ran them from the command line they worked.
After reading this question and Tim's comment, I tried deleting the scheduled task and re-creating it. After that it worked. I would concur that the scheduled task likely cached something.
From:
https://danblee.com/log-on-as-batch-job-rights-for-task-scheduler/
Go to the Start menu.
Run.
Type secpol.msc and press Enter.
The Local Security Policy manager opens.
Go to Security Settings – Local Policies – User Rights Assignment node.
Double click Log on as a batch job on the right side.
Click Add User or Group…
Select the user and click OK.

Reports won't run in batch

I am trying to run unmodified reports using batch processing in Microsoft Dynamics AX 2009. I have set up my configuration, and set up an AOS printer to run the report on. When I send a report to the batch queue, it immediately has an error when it begins execution.
The error is as follows:
Error executing code: SysGlobalCache object not initialized.
(S)\Classes\SysGlobalCache\get
(S)\Classes\ClassFactory\reportRunClass - line 14
(S)\Classes\RunBaseReport\makeReportRun - line 19
(S)\Classes\RunBaseReport\unpack - line 31
(S)\Classes\RunbaseReportStd\unpack - line 26
(S)\Classes\BatchRun\runJobStatic - line 27
I have tried running three different reports: Customer, Vendor, and Purchase Lines. I get the same error every time.
Any suggestions?
We faced a similar problem at my work, but didn't want to rely on setting up the legacy batch processing method suggested previously. Luckily, in our case it wasn't a requirement that the report actually be printed to hard copy, so rather than trying to send the report to a printer, you can run it to a file (ASCII, PDF, etc.).
The batch server can process these, but since you'll need to specify a place to save the file, watch out for the following:
Be sure to use a UNC path for the location you wish to save to, otherwise you may get the following error: "Target file must be in UNC format."
Also be sure the necessary permissions have been applied to allow writing to that location, otherwise you'd get an error such as: "Unable to open file "
I believe the issue is that the batch is trying to process server-side code, while the reports are meant to run client-side. Try the workaround at this URL:
http://blogs.msdn.com/b/emeadaxsupport/archive/2009/06/16/how-to-run-client-batches-on-ax-2009.aspx
The gist is this: you create a batch group called "Client" (or whatever), assign it to a batch server, and then run the legacy batch processor on that group. This might work for you.
Another option is to change the report to run on the server.
You'll need to check the menu item and make sure it's set to run on server.
It's a property on the menu item.
When you add a report to the batch, take a look at the batch inquiry screen.
Select the batch job - then click 'tasks'. If the task shows 'Run Location' = client, it won't run in the server-based batch framework.
Rob.
I was getting a similar error. I restarted the AOS and the SQL Reporting service and it all worked fine. Hope this helps.
