How to disable THREE notification messages? - three.js

I want to disable console messages such as "THREE.WebGLRenderer: Context Lost", "OBJLoader: 1.8330078125ms" and so on. Do you have any suggestions?

There is no built-in way to disable these messages, with the exception of OBJLoader2.setLogging.
Messages like these are extremely useful for debugging, not just in your development environment, but also when your code is out in the field.
But if you're hard-set on eliminating these messages, you can redirect logging that uses the console object.
// Place this at the start of your code
const log = console.log;
console.log = () => {};
const warn = console.warn;
console.warn = () => {};
const error = console.error;
console.error = () => {};
With this, anything calling console.log, console.warn, or console.error will be silenced, even ES6 modules outside the scope of your main file. This even applies to console message managers like debug.
But you can still write to the console yourself by using the saved functions. For example:
// In your code...
log("test message"); // will print "test message" to the console
This works because you saved references to the original functions in variables such as const log before overwriting them.
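If you later need the original behavior back (for example during a debugging session), a minimal sketch using those saved references looks like this:
// Restore the original console functions saved earlier
console.log = log;
console.warn = warn;
console.error = error;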

Download the source, search for and comment out all console.log, console.warn, and console.error calls, then rebuild the library.
git clone https://github.com/mrdoob/three.js.git
cd three.js
npm install
find src -type f -name '*.js' -exec sed -i '' 's/console\./\/\/console./' {} +
find examples/jsm -type f -name '*.js' -exec sed -i '' 's/console\./\/\/console./' {} +
find examples/js -type f -name '*.js' -exec sed -i '' 's/console\./\/\/console./' {} +
npm run build
Note that those search-and-replace commands assume each console call is on a line by itself.
After searching and replacing (and before building) you can check the results with
git diff
If the results are not correct you can reset all the files to their previous state with
git reset --hard
and then try different expressions or use your favorite text editor's search and replace across files.
People often object that they don't want to change the source, but three.js in particular is arguably about changing the source. Three's policy is that you download a particular version of three.js and write your code against that specific version; they are then free to break anything and everything with each new release, and they make no effort to stay backward compatible between versions. So hack your version however it needs to be hacked for your needs.
In the case above, modifying the library is especially easy because you can practically automate it, so if you do move to a newer version, you can run these steps again after you've fixed any new incompatibilities.

The short answer is something like this, which is a spin off of code suggested by mr.doob himself, ironically:
const vrgc = {};
vrgc.console = {
    log: console.log,
    info: console.info,
    warn: console.warn,
    error: console.error,
};

Object.keys( vrgc.console ).forEach( key => {
    console.warn( `hiding console.${key} calls from THREE` );
    console[ key ] = function () {
        if ( ( typeof arguments[ 0 ] === 'string' ) && ( arguments[ 0 ].substr( 0, 5 ) === 'THREE' ) ) {
            // ignore THREE
            return;
        } else {
            const originalFunc = vrgc.console[ key ];
            originalFunc.apply( console, arguments );
        }
    };
} );
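As a quick sanity check once the snippet above has run, THREE-prefixed messages are swallowed while everything else still prints:
console.log('THREE.WebGLRenderer: Context Lost'); // silently ignored
console.log('my own message'); // printed as usual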
Unfortunately, this request has a long history stretching back 6+ years across several GitHub issues and forum threads, et al...

Adds a logging parameter to the WebGL renderer.
Useful for situations where you don't want logging at all such as production or in a testing environment. Retains true as default but would think false would be a better option? https://github.com/mrdoob/three.js/pull/5835

Related

Problem Generating Html Report Using DbUp during Octopus Deployment

Using Octopus Deploy to deploy a simple API.
The first step of our deployment process is to generate an HTML report with the delta of the scripts run vs the scripts required to run. I used this tutorial to create the step.
The relevant code in my console application is:
var reportLocationSection = appConfiguration.GetSection(previewReportCmdLineFlag);
if (reportLocationSection.Value is not null)
{
    // Generate a preview file so Octopus Deploy can generate an artifact for approvals
    try
    {
        var report = reportLocationSection.Value;
        var fullReportPath = Path.Combine(report, deltaReportName);
        Console.WriteLine($"Generating upgrade report at {fullReportPath}");
        upgrader.GenerateUpgradeHtmlReport(fullReportPath);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        return operationError;
    }
}
The PowerShell I am using in the script step is:
# Get the extracted path for the package
$packagePath = $OctopusParameters["Octopus.Action.Package[DatabaseUpdater].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Database.ConnectionString"]
$reportPath = $OctopusParameters["Project.HtmlReport.Location"]
Write-Host "Report Path: $($reportPath)"
$exeToRun = "$($packagePath)\DatabaseUpdater.exe"
$generatedReport = "$($reportPath)\UpgradeReport.html"
Write-Host "Generated Report: $($generatedReport)"
if ((test-path $reportPath) -eq $false){
    New-Item "Creating new directory..."
} else {
    New-Item "Directory already exists."
}
# Run this .NET app, passing in the Connection String and a flag
# which tells the app to create a report, but not update the database
& $exeToRun --connectionString="$($connectionString)" --previewReportPath="$($reportPath)"
New-OctopusArtifact -Path "$($generatedReport)"
The error reported by Octopus is:
'Could not find file 'C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html'.'
I'm guessing that is being thrown when this powershell line is hit: New-OctopusArtifact ...
And that seems to indicate that the report was never created.
I've used a bit of logging to log out certain variables and the values look sound:
Report Path: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9
Generated Report: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
Generating upgrade report at C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
As you can see in the C#, the relevant code is wrapped in a try/catch block, but I'm not sure whether the error is being written out there or at a later point by Octopus (I'd need to do a pull request to add a marker in the code).
Can anyone see a way forward in resolving this? Has anyone else encountered this?
Cheers
I recently redid some of the work from that article for this video up on YouTube. I did run into some issues with the .SQL files not being included in the assembly. I think it was after I upgraded to .NET 6. But that might be a coincidence.
Anyway, because the files weren't being included in the assembly, when I ran the command line app via Octopus, it wouldn't properly generate the file for me. I ended up configuring the project to copy the .SQL files to a folder in the output directory instead of embedding them in the assembly. You can view a sample package here.
One thing that helped me is running the app in a debugger with the same parameters just to make sure it was actually generating the file. I'm sure you already thought of that, but I'd be remiss if I forgot to include it in my answer. :)
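For example, a hypothetical local run using the flag names from the script step above (substitute your real connection string and a writable folder):
.\DatabaseUpdater.exe --connectionString="Server=...;Database=...;" --previewReportPath="C:\temp\DeltaReports"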
FWIW, these are my updated scripts.
First, the Octopus Script:
$packagePath = $OctopusParameters["Octopus.Action.Package[Trident.Database].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Connection.String"]
$environmentName = $OctopusParameters["Octopus.Environment.Name"]
$reportPath = $OctopusParameters["Project.Database.Report.Path"]
cd $packagePath
$appToRun = ".\Octopus.Trident.Database.DbUp"
$generatedReport = "$reportPath\UpgradeReport.html"
& $appToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"
New-OctopusArtifact -Path "$generatedReport" -Name "$environmentName.UpgradeReport.html"
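One note on directories: the question's step tried to create the report folder with New-Item "Creating new directory...", which actually creates an item literally named that string. The console app below creates the directory itself, but if you'd rather guard for it on the PowerShell side too, an optional addition (not part of my original script) would be:
if (-not (Test-Path $reportPath)) {
    New-Item -ItemType Directory -Path $reportPath | Out-Null
}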
My C# code can be found here but for ease of use, you can see it all here (I'm not proud of how I parse the parameters).
static void Main(string[] args)
{
    var connectionString = args.FirstOrDefault(x => x.StartsWith("--ConnectionString", StringComparison.OrdinalIgnoreCase));
    connectionString = connectionString.Substring(connectionString.IndexOf("=") + 1).Replace(@"""", string.Empty);

    var executingPath = Assembly.GetExecutingAssembly().Location.Replace("Octopus.Trident.Database.DbUp", "").Replace(".dll", "").Replace(".exe", "");
    Console.WriteLine($"The execution location is {executingPath}");

    var deploymentScriptPath = Path.Combine(executingPath, "DeploymentScripts");
    Console.WriteLine($"The deployment script path is located at {deploymentScriptPath}");

    var postDeploymentScriptsPath = Path.Combine(executingPath, "PostDeploymentScripts");
    Console.WriteLine($"The post-deployment script path is located at {postDeploymentScriptsPath}");

    var upgradeEngineBuilder = DeployChanges.To
        .SqlDatabase(connectionString, null)
        .WithScriptsFromFileSystem(deploymentScriptPath, new SqlScriptOptions { ScriptType = ScriptType.RunOnce, RunGroupOrder = 1 })
        .WithScriptsFromFileSystem(postDeploymentScriptsPath, new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 2 })
        .WithTransactionPerScript()
        .LogToConsole();

    var upgrader = upgradeEngineBuilder.Build();
    Console.WriteLine("Is upgrade required: " + upgrader.IsUpgradeRequired());

    if (args.Any(a => a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
    {
        // Generate a preview file so Octopus Deploy can generate an artifact for approvals
        var report = args.FirstOrDefault(x => x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
        report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);

        if (Directory.Exists(report) == false)
        {
            Directory.CreateDirectory(report);
        }

        var fullReportPath = Path.Combine(report, "UpgradeReport.html");
        if (File.Exists(fullReportPath) == true)
        {
            File.Delete(fullReportPath);
        }

        Console.WriteLine($"Generating the report at {fullReportPath}");
        upgrader.GenerateUpgradeHtmlReport(fullReportPath);
    }
    else
    {
        var result = upgrader.PerformUpgrade();

        // Display the result
        if (result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Success!");
        }
        else
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.WriteLine("Failed!");
        }
    }
}
I hope that helps!
After long and detailed investigation, we discovered the answer was quite obvious.
We had assumed the existing deploy process configuration was sound, because we had never had a problem with it until now. As it transpires, there was a problem, which led to the Development deployments being deployed twice.
Hence the errors like the one above, and others about file handles being held by another process.
It was actually obvious in hindsight, but we were blind to it as we thought the existing process was sound 😣

Can asynchronous module definitions be used with abstract syntax trees on v8 engine to read third party dependencies? [duplicate]

I understand that eval string-to-function cannot be used with the browsers' application programming interfaces, but there must be another strategy to use third-party dependencies without Node.js on the V8 engine, given that Cloudflare does it in-house for Workers (unless they disable that method, by necessity or otherwise, on their edge servers). I imagine I could gather the AST of the CommonJS module, as I was able to with rollup watch, but what might the actual steps be, in terms of tooling? I mention AMD because it seems to rely on string-to-function (about which I've noticed Mozilla MDN says nothing much).
I have been exploring the require.js repositories, and they either use eval or an AST:
function DEFNODE(type, props, methods, base) {
    if (arguments.length < 4) base = AST_Node;
    if (!props) props = [];
    else props = props.split(/\s+/);
    var self_props = props;
    if (base && base.PROPS) props = props.concat(base.PROPS);
    var code = "return function AST_" + type + "(props){ if (props) { ";
    for (var i = props.length; --i >= 0; ) {
        code += "this." + props[i] + " = props." + props[i] + ";";
    }
    var proto = base && new base();
    if ((proto && proto.initialize) || (methods && methods.initialize))
        code += "this.initialize();";
    code += "}}";
    // constructor
    var cnstor = new Function(code)();
    if (proto) {
        cnstor.prototype = proto;
        cnstor.BASE = base;
    }
    if (base) base.SUBCLASSES.push(cnstor);
    cnstor.prototype.CTOR = cnstor;
    cnstor.PROPS = props || null;
    cnstor.SELF_PROPS = self_props;
    cnstor.SUBCLASSES = [];
    if (type) {
        cnstor.prototype.TYPE = cnstor.TYPE = type;
    }
    if (methods)
        for (i in methods)
            if (HOP(methods, i)) {
                if (/^\$/.test(i)) {
                    cnstor[i.substr(1)] = methods[i];
                } else {
                    cnstor.prototype[i] = methods[i];
                }
            }
    // a function that returns an object with [name]:method
    cnstor.DEFMETHOD = function (name, method) {
        this.prototype[name] = method;
    };
    if (typeof exports !== "undefined") exports[`AST_${type}`] = cnstor;
    return cnstor;
}

var AST_Token = DEFNODE(
    "Token",
    "type value line col pos endline endcol endpos nlb comments_before file raw",
    {},
    null
);
https://codesandbox.io/s/infallible-darwin-8jcl2k?file=/src/mastercard-backbank/uglify/index.js
https://www.youtube.com/watch?v=EF7UW9HxOe4
Is it possible to make a C++ addon just to add a default object for node.js named exports, or am I Y'ing up the wrong X?
'.so' shared library for C++ dlopen/LoadLibrary (or #include?)
“I have to say that I'm amazed that there is code out there that loads one native addon from another native addon! Is it done by acquiring and then calling an instance of the require() function, or perhaps by using uv_dlopen() directly?”
N-API: An api for embedding Node in applications
"[there is no ]napi_env[ just yet]."
node-api: allow retrieval of add-on file name - Missing module in Init
Andreas Rossberg - is AST parsing, or initialize node.js abstraction for native c++, enough?
v8::String::NewFromUtf8(isolate, "Index from C++!");
Rising Stack - Node Source
"a macro implicit" parameter - bridge object between
C++ and JavaScript runtimes
extract a function's parameters and set the return value.
#include <nan.h>
NAN_METHOD(Index) {
    info.GetReturnValue().Set(
        Nan::New("Index from C++!").ToLocalChecked()
    );
}
// Module initialization logic
NAN_MODULE_INIT(Initialize) {
    /* Export the `Index` function
       (equivalent to `export function Index (...)` in JS) */
    NAN_EXPORT(target, Index);
}
New module "App" Initialize function from NAN_MODULE_INIT (an atomic?-macro)
"__napi_something doesn't exist."
"node-addon-API module for C++ code (N-API's C code's headers)"
NODE_MODULE(App, Initialize);
Sep 17, 2013, 4:42:17 AM to v8-u...#googlegroups.com: "This comes up frequently, but the answer remains the same: scrap the idea. ;) Neither the V8 parser nor its AST are designed for external interfacing. In particular (1) V8's AST does not necessarily reflect JavaScript syntax 1-to-1, (2) we change it all the time, and (3) it depends on various V8 internals. And since all these points are important for V8, don't expect the situation to change. /Andreas"
V8 c++: How to import module via code to script context (5/28/22, edit)
"The export keyword may only be used in a module interface unit.
The keyword is attached to a declaration of an entity, and causes that
declaration (and sometimes the definition) to become visible to module
importers[ - except for] the export keyword in the module-declaration, which is just a re-use of the keyword (and does not actually “export” ...entities)."
SyntheticModule::virtual
ScriptCompiler::CompileModule() - "Corresponds to the ParseModule abstract operation in the ECMAScript specification."
Local<Function> foo_func = ...; // external
Local<Module> module = Module::CreateSyntheticModule(
    isolate, name,
    {String::NewFromUtf8(isolate, "foo")},
    [](Local<Context> context, Local<Module> module) {
        module->SetSyntheticModuleExport(
            String::NewFromUtf8(isolate, "foo"), foo_func
        );
    });
Context-Aware addons from node.js' commonjs modules
export module index;
export class Index {
public:
    const char* app() {
        return "done!";
    }
};

import index;
import <iostream>;

int main() {
    std::cout << Index().app() << '\n';
}
node-addon-api (new)
native abstractions (old)
"Thanks to the crazy changes in V8 (and some in Node core), keeping native addons compiling happily across versions, particularly 0.10 to 0.12 to 4.0, is a minor nightmare. The goal of this project is to store all logic necessary to develop native Node.js addons without having to inspect NODE_MODULE_VERSION and get yourself into a macro-tangle[ macro = extern atomics?]."
Scope Isolate (v8::Isolate), variable Local (v8::Local)
typed_array_to_native.cc
"require is part of the Asynchronous Module Definition AMD API[, without "string-to-function" eval/new Function()],"
node.js makes objects, for it is written in C++.
"According to the algorithm, before finding
./node_modules/_/index.js, it tried looking for express in the
core Node.js modules. This didn’t exist, so it looked in node_modules,
and found a directory called _. (If there was a
./node_modules/_.js, it would load that directly.) It then
loaded ./node_modules/_/package.json, and looked for an exports
field, but this didn’t exist. It also looked for a main field, but
this didn’t exist either. It then fell back to index.js, which it
found. ...require() looks for node_modules in all of the parent directories of the caller."
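As a rough illustration of that lookup (not Node's actual resolver, just the node_modules walk up the parent directories that the quote describes):
// Illustrative only: list the candidate paths require('express') would try from /app/src
const path = require('path');

function candidatePaths(fromDir, name) {
    const out = [];
    let dir = fromDir;
    while (true) {
        const base = path.join(dir, 'node_modules', name);
        // a plain file, then package.json ("exports"/"main"), then index.js
        out.push(base + '.js', path.join(base, 'package.json'), path.join(base, 'index.js'));
        const parent = path.dirname(dir);
        if (parent === dir) break; // reached the filesystem root
        dir = parent;
    }
    return out;
}

console.log(candidatePaths('/app/src', 'express'));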
But java?
I won't accept this answer until it works, but this looks promising:
https://developer.oracle.com/databases/nashorn-javascript-part1.html
If not to run a jar file or something, in the Worker:
https://github.com/nodyn/jvm-npm
require and build equivalent in maven, first, use "dist/index.js".
Specifically: ScriptEngineManager
https://stackoverflow.com/a/15787930/11711280
Actually: js.commonjs-require experimental
https://docs.oracle.com/en/graalvm/enterprise/21/docs/reference-manual/js/Modules/
Alternatively/favorably: commonjs builder in C (v8 and node.js)
https://www.reddit.com/r/java/comments/u7elf4/what_are_your_thoughts_on_java_isolates_on_graalvm/
Here I will explore v8/node.js src .h and .cc for this purpose
https://codesandbox.io/s/infallible-darwin-8jcl2k?file=/src/c.cpp
I'm curious why there is near machine-level C operability in Workers if not to use std::ifstream, and/or build-locally, without node.js require.

Simple MediaWiki extension debugging

I am trying to write my very first MediaWiki extension and need some way to debug it. What is the simplest way to do it? Showing a message, logging to a file, etc. would be fine. I just want to step slowly through the code and see where it breaks and what the contents of variables are.
I've tried (from http://www.mediawiki.org/wiki/Manual:How_to_debug#Useful_debugging_functions)
// ...somewhere in your code
if ( true ) {
    wfDebugLog( 'myext', 'Something is not right: ' . print_r( 'asdf', true ) );
}
in extensions/myext/myext.php and added to LocalSettings.php
require_once( 'extensions/myext/myext.php' );
# debugging on
$wgDebugLogGroups = array(
    'myext' => 'extensions/myext/myextension.log'
);
but then my Wiki doesn't work at all (error 500). With the above code removed from myext.php everything's fine (with $wgExtensionCredits in myext.php, I can see myext in the Special:Version).
Is it the right thing to do (then what is the mistake) or is there a better/simpler way to start with?
500 means you have a syntax error or wrong configuration somewhere. Have you followed the instructions at Manual:How to debug and turned on PHP logging, so you can at least see what is causing the error? Alternatively, take a look at your Apache server log.
Also, you'll want to turn on debugging before you load your own extension!
Add these to LocalSettings.php for debugging:
error_reporting( -1 );
ini_set( 'display_startup_errors', 1 );
ini_set( 'display_errors', 1 );
$wgShowExceptionDetails=true;
$wgDebugToolbar=true;
$wgShowDebug=true;
$wgDevelopmentWarnings=true;
$wgDebugDumpSql = true;
$wgDebugLogFile = '/tmp/debug.log';
$wgDebugComments = true;
$wgEnableParserCache = false;
$wgCachePages = false;
You can log debug messages with wfDebug();
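For example, a minimal sketch (the 'myext' group name and $someValue are placeholders):
// wfDebug writes to $wgDebugLogFile; wfDebugLog writes to the group's file from $wgDebugLogGroups
wfDebug( "myext: reached this point\n" );
wfDebugLog( 'myext', 'someValue = ' . print_r( $someValue, true ) );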
Learn more at https://www.mediawiki.org/wiki/Manual:Structured_logging/en

Problem in ImageMagick and Grails

I have a new problem with ImageMagick that looks strange.
I'm using Mac OS X Snow Leopard and have installed ImageMagick on it; it works fine from the command line.
But when I call it from a Grails class like the following snippet, it gives me
"Cannot run program "convert": error=2, No such file or directory"
The code is:
public static boolean resizeImage(String srcPath, String destPath, String size) {
    ArrayList<String> command = new ArrayList<String>(10);
    command.add("convert");
    command.add("-geometry");
    command.add(size);
    command.add("-quality");
    command.add("100");
    command.add(srcPath);
    command.add(destPath);
    System.out.println(command);
    return exec((String[]) command.toArray(new String[1]));
}

private static boolean exec(String[] command) {
    Process proc;
    try {
        //System.out.println("Trying to execute command " + Arrays.asList(command));
        proc = Runtime.getRuntime().exec(command);
    } catch (IOException e) {
        System.out.println("IOException while trying to execute ");
        for (int i = 0; i < command.length; i++) {
            System.out.println(command[i]);
        }
        return false;
    }
    //System.out.println("Got process object, waiting to return.");
    int exitStatus;
    while (true) {
        try {
            exitStatus = proc.waitFor();
            break;
        } catch (java.lang.InterruptedException e) {
            System.out.println("Interrupted: Ignoring and waiting");
        }
    }
    if (exitStatus != 0) {
        System.out.println("Error executing command: " + exitStatus);
    }
    return (exitStatus == 0);
}
I've tried a normal command like ls and it's OK, so the problem is that Grails can't find the convert command itself. Is it an OS problem or something?
(see below for the answer)
I have run into the same problem. The issue appears to be something with Mac OS X specifically, as we have several Linux instances running without error. The error looks similar to the following:
java.io.IOException: Cannot run program "/usr/bin/ImageMagick-6.7.3/bin/convert /a/temp/in/tmpPic3143119797006817740.png /a/temp/out/100000726.png": error=2, No such file or directory
All the files are there, and in chmod 777 directories - and as you pointed out, running the exact command from the shell works fine.
My theory at this point is that ImageMagick cannot load some library of its own, and the "no such file" refers to a dylib or something along those lines.
I have tried setting LD_LIBRARY_PATH and a few others to no avail.
I finally got this working. Here is how I have it setup. I hope this helps.
The crux of the fix, for me, was that I wrapped 'convert' in a shell script, set a bunch of environment variables, and then called that shell script instead of convert directly:
(convertWrapper.sh)
export MAGICK_HOME=/usr/local/ImageMagick-6.7.5
export MAGICK_CONFIGURE_PATH=${MAGICK_HOME}/etc/ImageMagick:${MAGICK_HOME}/share/doc/ImageMagick/www/source
export PATH=${PATH}:${MAGICK_HOME}/bin
export LD_LIBRARY_PATH=${MAGICK_HOME}/lib:${LD_LIBRARY_PATH}
export DYLD_LIBRARY_PATH=${DYLD_LIBRARY_PATH}:${MAGICK_HOME}/lib
export MAGICK_TMPDIR=/private/tmp
echo "$@" >> /private/tmp/m.log 2>&1
/usr/local/ImageMagick-6.7.5/bin/convert -verbose "$@" >> /private/tmp/m.log 2>&1
(convertWrapper.sh)
Additionally, the convert call was doing some rather complicated stuff, so I added the parameter '-respect-parenthesis' (which may or may not have had an effect).
I am not sure how much of the environment variable setting is needed as I was stabbing in the dark for a while, but since this is only for my development box...
You need to work out what your PATH is set to when you run a command from Java. It must be different to the one you have when running from the terminal.
Are you running Grails (via Tomcat?) as a different user? It might have a different path to your normal user.
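A quick way to check is to print the PATH from the same place your Grails code runs, for example (just a sketch):
// Compare this with the output of "echo $PATH" in the terminal where convert works
println System.getenv('PATH')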
You might want to try one of the image plugins that are part of the Grails ecosystem:
http://www.grails.org/ImageTools+plugin
The Grails PATH when the app is running in the server is probably different from the one you get when running java from the command line.
I did it like this:
Put the "convert" file in /usr/bin.
Then add to Config.groovy:
gk {
    imageMagickPath = "/usr/bin/convert"
}
Then in my ImageService.groovy:
import org.springframework.web.context.request.RequestContextHolder as RCH
[..]
def grailsApplication = RCH.requestAttributes.servletContext.grailsApplication
def imPath = grailsApplication.config.gk.imageMagickPath
def command = imPath + " some_properties"
def proc = Runtime.getRuntime().exec(command)
So this way you get command like: /usr/bin/convert some_properties
And it works, but don't forget to put the "convert" file in that location and reference it from there.
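If you would rather not hard-code the path in Config.groovy, another option is to call convert by absolute path with ProcessBuilder, which sidesteps the JVM's PATH entirely. A sketch, reusing srcPath and destPath from the question and assuming convert lives in /usr/local/bin (adjust as needed):
// Call convert by absolute path so the JVM's PATH does not matter
def pb = new ProcessBuilder('/usr/local/bin/convert', '-geometry', '100x100', srcPath, destPath)
pb.redirectErrorStream(true)
def proc = pb.start()
println proc.inputStream.text // log convert's output for debugging
def exitCode = proc.waitFor()
println "convert exited with ${exitCode}"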

How can I automate an existing instance of Internet Explorer using Perl?

I am struggling to get control of an IE preview control of class 'Internet Explorer_Server' in an external Windows application with Perl.
Internet Explorer_Server is the class name of the window; I found it with Spy++. Here's my assertion code for it:
$className = Win32::GUI::GetClassName($window);
if ($className eq "Internet Explorer_Server") {
...
}
I can get a handle of that 'Internet Explorer_Server' with Win32::GUI::GetWindow, but have no idea what to do next.
Updated: You are going down the wrong path. What you need is Win32::OLE.
#!/usr/bin/perl

use strict;
use warnings;

use Win32::OLE;
$Win32::OLE::Warn = 3;

my $shell = get_shell();

my $windows = $shell->Windows;
my $count   = $windows->{Count};

for my $item ( 1 .. $count ) {
    my $window = $windows->Item( $item );
    my $doc    = $window->{Document};
    next unless $doc;
    print $doc->{body}->innerHTML;
}

sub get_shell {
    my $shell;
    eval {
        $shell = Win32::OLE->GetActiveObject('Shell.Application');
    };
    die "$@\n" if $@;

    return $shell if defined $shell;

    $shell = Win32::OLE->new('Shell.Application')
        or die "Cannot get Shell.Application: ",
               Win32::OLE->LastError, "\n";
}

__END__
So, this code finds a window with a Document property and prints the HTML. You will have to decide on what criteria you want to use to find the window you are interested in.
ShellWindows documentation.
You may want to have a look at Win32::IE::Mechanize. I am not sure whether you can control an existing IE window with this module, but accessing a single URL should be possible in about five lines of code.
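Roughly, an untested sketch based on its WWW::Mechanize-style interface (the URL is a placeholder):
use strict;
use warnings;
use Win32::IE::Mechanize;

my $ie = Win32::IE::Mechanize->new( visible => 1 );
$ie->get('http://example.com/');
print $ie->content;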
Have you looked at SAMIE (http://samie.sourceforge.net/)? It is a Perl module for controlling IE.
