GD+PHP: Exception to see if imagefill fails?

imagetruecolortopalette($dst_image, true, $colorcount);
imagepalettecopy($dst_image, $src_image);
$transparentcolor = imagecolortransparent($src_image);
imagefill($dst_image, 0, 0, $transparentcolor);
imagecolortransparent($dst_image, $transparentcolor);
So I want something like:
if imagefill fails
    unlink
    exit with error
else
    continue with imagefill
Or:
if imagefill takes more than X seconds
    unlink
    exit with error
else
    continue with imagefill
Any ideas? Please help.
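For the first variant, a minimal sketch: imagefill() returns false on failure, so its return value can be branched on directly, and set_time_limit() is a rough stand-in for the per-call timeout (here $dst_path is a hypothetical variable holding the output file's path):
set_time_limit(30); // abort the whole script if it runs longer than 30 seconds

if (!imagefill($dst_image, 0, 0, $transparentcolor)) {
    unlink($dst_path);        // remove the partial output file ($dst_path is hypothetical)
    exit("imagefill failed"); // stop with an error message
}
imagecolortransparent($dst_image, $transparentcolor);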

I ended up switching over to ImageMagick.
- Two times faster.
- Uses less memory.
- No more errors.

Related

Does $argv behave the same between CentOS and RHEL systems?

I am trying to troubleshoot an old TCL accounting script called GOTS (Grant Of The System). What it does is create a time-stamped logfile entry for each user login and another for the logout. The problem is that it is not creating the second log file entry on logout. I think I have tracked down the area where it is going wrong, and I have attached it here. FYI, the log file exists, and the script does not exit with the error "GOTS was called incorrectly!!". It should be executing the if/then branch for [string match "$argv" "end_session"].
This software runs properly on RHEL Linux 6.9 but fails as described on CentOS 7. I am thinking that there is a system variable, or a difference in the $argv argument vector between the two systems, that creates this behavior.
Am I correct in suspecting $argv, and if not, does anyone see the true problem?
How do I print or display the $argv values on logout?
# Find out if we're beginning or ending a session
if { [string match "$argv" "end_session"] } {
    if { ![file writable $Log] } {
        onErrorNotify "4 LOG"
    }
    set ifd [open $Log a]
    puts $ifd "[clock format [clock seconds]]\t$Instrument\t$LogName\t$GroupName"
    close $ifd
    unset ifd
    exit 0
} elseif { [string match "$argv" "begin_session"] == 0 } {
    puts stderr "GOTS was called incorrectly!!"
    exit -1
}
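As for the side question of displaying the $argv values on logout: with no display available in the PostSession context, one sketch is to append them to a debug file near the top of the script (the path /var/tmp/gots_argv.log is an assumption):
set dfd [open /var/tmp/gots_argv.log a]
# Record what the script actually received, since stdout/X are unavailable on logout
puts $dfd "[clock format [clock seconds]]\targv=$argv"
close $dfd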
The end_session argument is supplied by the /etc/gdm/PostSession/Default file:
#!/bin/sh
### Begin GOTS PostSession
# Do not run GOTS if root is logging out
if test "${USER}" == "root" ; then
    exit 0
fi
/usr/local/lib/GOTS/gots end_session > /var/tmp/gots_postsession.log 2> /var/tmp/gots_postsession.log
exit 0
### End GOTS PostSession
This is the postsession log file:
Application initialization failed: couldn't connect to display ":1"
Error in startup script: invalid command name "option"
while executing
"option add *Font "-adobe-new century schoolbook-medium-r-*-*-*-140-*-*-*-*-*-*""
(file "/usr/local/lib/GOTS/gots" line 26)
After a lot of troubleshooting, we have determined that, for whatever reason, CentOS is not allowing part of the /etc/gdm/PostSession/Default file to execute:
fi
/usr/local/lib/GOTS/gots end_session
But it does update the PostSession.log file as it should. Does anyone have any idea what could be interfering with only part of PostSession/Default?
Could it be that you are hitting Bug 851769?
That said, am I correct in stating that, as your investigation shows, this is not a Tcl-related issue or question anymore?
So it turns out that our script has certain elements that depend upon the X server running at logout to display some of the GUI error messages. This is from:
Gnome Configuration
"When a user terminates their session, GDM will run the PostSession script. Note that the Xserver will have been stopped by the time this script is run, so it should not be accessed.
Note that the PostSession script will be run even when the display fails to respond due to an I/O error or similar. Thus, there is no guarantee that X applications will work during script execution."
We are having to rewrite those error message callouts so they simply write the errors to a file instead of depending on the display. The errors are for things that should be there in the beginning anyway.
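A minimal sketch of that rewrite, assuming onErrorNotify (seen in the snippet above) is what currently drives the GUI messages, and that a path like /var/tmp/gots_errors.log is acceptable:
proc onErrorNotify {msg} {
    # Write to a file instead of the display; the X server is gone by PostSession time
    set fd [open /var/tmp/gots_errors.log a]
    puts $fd "[clock format [clock seconds]]\tGOTS error: $msg"
    close $fd
}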

Warning when using Parallel::ForkManager, but only in Windows

I sometimes get this warning when using Parallel::ForkManager, but only in Windows, not on a Unix-based system. What does it mean, and should I worry about it?
child process '-17108' disappeared. A call to waitpid outside of
Parallel::ForkManager might have reaped it.
Here is the sample code from the docs that my code is based on:
use LWP::Simple;
use Parallel::ForkManager;

my @links = (
    ["http://www.foo.bar/rulez.data", "rulez_data.txt"],
    ["http://new.host/more_data.doc", "more_data.doc"],
);

# Max 30 processes for parallel download
my $pm = Parallel::ForkManager->new(30);

LINKS:
foreach my $linkarray (@links) {
    $pm->start and next LINKS; # do the fork
    my ($link, $fn) = @$linkarray;
    warn "Cannot get $fn from $link"
        if getstore($link, $fn) != RC_OK;
    $pm->finish; # do the exit in the child process
}
$pm->wait_all_children;
I had a similar issue, and placing a sleep 1 before $pm->start and next LINKS; fixed it. I guess it's due to continuous forking, where Perl loses track of the forked processes. I may be wrong!
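For concreteness, that workaround lands in the loop like this (a sketch of the suggestion above, not a guaranteed fix):
LINKS:
foreach my $linkarray (@links) {
    sleep 1;                   # throttle the fork rate; this avoided the waitpid warning on Windows
    $pm->start and next LINKS; # do the fork
    # ... rest of the loop body unchanged ...
}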

How can I check whether an exception has been logged for 30 seconds?

Background: I'd like to assert that no exceptions are written to a log for 30 seconds. Basically it's a smoke test to see if my application has come up and we haven't introduced any serious bugs.
Requirements: I'd like to do this using a bash script, preferably using common shell utilities. An exception is basically a single line that starts with !. There are a lot of other log lines written that are not exceptions.
Questions: How can I do this?
Here's one possible solution:
timeout 30 tail -F -n 0 my.log | grep --line-buffered '^!' | head -n 1
I can then check whether the exit code is 124 or 143 (timed out; I don't know why it varies) or 0 (line found). This is my best bet so far. However, the solution doesn't seem to exit very quickly upon an exception. I'd love to hear other solutions!
Assuming the log file will only be updated by the program when an exception occurs, you can use the following command:
stat log_file_name
And you'll get output similar to what's below. You can run stat again after 30 seconds or so and compare the current and previous results; if you don't see any change in the timestamps, then the file has not been modified in the interval.
Access: 2015-03-27 15:22:17.000000000 +0530
Modify: 2015-03-27 15:22:16.000000000 +0530
Change: 2015-03-27 15:22:16.000000000 +0530
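Scripted, that comparison could look like this (a sketch assuming GNU stat and a log named my.log, as in the question):
before=$(stat -c %Y my.log)   # modification time, seconds since the epoch
sleep 30
after=$(stat -c %Y my.log)
if [ "$before" -eq "$after" ]; then
    echo "no exceptions logged in the last 30 seconds"
else
    echo "log file was modified during the window"
    exit 1
fi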

How to ignore an error and continue with the rest of the script

Some background: I want to delete an AWS Redshift cluster, and the process takes more than 30 minutes. So this is what I want to do:
- Start the deletion.
- Every 1 minute, check the cluster status (it should be "deleting").
- When the cluster is deleted, the command will fail (because it cannot find the cluster anymore), so log some message and continue with the rest of the script.
This is the command I run in a while loop to check the cluster status after I start the deletion:
resp = redshift.client.describe_clusters(:cluster_identifier=>"blahblahblah")
The above command reports the cluster status as "deleting" while the deletion process continues. But once the cluster is completely deleted, the command itself fails, as it can no longer find the cluster blahblahblah.
Here is the error from command once the cluster is deleted:
/var/lib/gems/1.9.1/gems/aws-sdk-1.14.1/lib/aws/core/client.rb:366:in `return_or_raise': Cluster blahblahblah not found. (AWS::Redshift::Errors::ClusterNotFound)
I agree with this error. But this makes my script exit abruptly. So I want to log a message saying The cluster is deleted....continuing and continue with my script.
I tried this:
resp = redshift.client.describe_clusters(:cluster_identifier => "blahblahblah") || raise("The cluster is deleted....continuing")
I also tried a couple of suggestions mentioned at https://www.ruby-forum.com/topic/133876, but this is not working. My script exits once the above command fails to find the cluster.
Question: how do I ignore the error, print my own message saying "The cluster is deleted....continuing", and continue with the script?
Thanks.
def delete_clusters(clusters = [])
  clusters.each do |target_cluster|
    puts "will delete #{target_cluster}"
    begin
      while (some_condition) do
        resp = redshift.client.describe_clusters(:cluster_identifier => target_cluster)
        # break condition
      end
    rescue AWS::Redshift::Errors::ClusterNotFound => cluster_exception
      raise "The cluster, #{target_cluster} (#{cluster_exception.message}), is deleted....continuing"
    end
    puts "doing other things now"
    # ....
  end
end
@NewAlexandria, I changed your code to look like below:
puts "Checking the cluster status"
begin
resp = redshift.client.describe_clusters(:cluster_identifier=>"blahblahblah")
rescue AWS::Redshift::Errors::ClusterNotFound => cluster_exception
puts "The cluster is deleted....continuing"
end
puts "seems like the cluster is deleted and does not exist"
OUTPUT:
Checking the cluster status
The cluster is deleted....continuing
seems like the cluster is deleted and does not exist
I changed the raise to puts in the line that immediately follows the rescue line in your response. This way I got rid of the RuntimeError that I mentioned in my comment above.
I do not know what the implications of this are. I do not even know whether this is the right way to do it. But it shows the error when the cluster is not found and then continues with the script.
Later I read a lot of articles on Ruby exception/rescue/raise/throw, but that was just too much for me to understand, as I do not come from a programming background at all. So if you could explain what is going on here, it would really help me gain more confidence in Ruby.
Thanks for your time.
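In short: rescue catches the exception raised inside the begin...end block, so execution resumes after the block instead of the script aborting. Put together, the polling loop the question describes could look like this (a sketch against the thread's aws-sdk v1 client; blahblahblah is the placeholder cluster id):
deleted = false
until deleted
  begin
    redshift.client.describe_clusters(:cluster_identifier => "blahblahblah")
    puts "cluster still reports as deleting"
    sleep 60  # check every minute
  rescue AWS::Redshift::Errors::ClusterNotFound
    puts "The cluster is deleted....continuing"
    deleted = true
  end
end
puts "continuing with the rest of the script"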

SSH error: name or service not known

I am getting this error because there are 5 peripheral trays in 10 available slots (subject to change at any time), so I have no option other than pinging all of them and performing a command (killall). Is there a way to suppress this error and just perform the operation if the tray is available, ignoring it otherwise?
PS: I am writing a Ruby script. Please help me out.
The code goes like this:
for i loop
    ssh -f -n user@host_$i killall -9 process
Would this be a workable solution? Wrap your code in an exception-handling block and do nothing in the handler:
for i in 1..10
  begin
    system("ssh -f -n user@host_#{i} killall -9 process")
  rescue Exception => e
    # forget about logging anything
  end
end
Curious whether this block solves the problem. Not a great solution, but it tries to refine the exception-based one above:
killports = 0
killedcount = false
if killedcount == false
  while killports <= 10
    begin
      puts killports
      killports = killports + 1
      killedcount = true
      system("ssh -f -n user@host_#{killports} killall -9 process")
    rescue Exception => e
      puts "Coming to an exception"
      if killports <= 10 && killedcount == true
        killedcount = false
        retry
      else
        raise
      end
    end
    killedcount = false
  end
end
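One more hedged alternative: a failed ssh (including "name or service not known") surfaces in Ruby as a nonzero exit status rather than an exception, so checking the return value of system (and discarding ssh's stderr) avoids the error output without any rescue block. A sketch, assuming hosts named host_1 through host_10:
(1..10).each do |i|
  host = "host_#{i}"
  ok = system("ssh -f -n user@#{host} killall -9 process 2>/dev/null")
  puts "skipped #{host} (unreachable)" unless ok
end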
