Possible Duplicate:
Import AppleScript methods in another AppleScript?
Is there anything in AppleScript that can be used like the #include directive in C?
For instance:
INCLUDE_DIRECTIVE "Path/To/Applescript.scpt"
//Some AppleScript code here
Absolutely you can do this, and there are two variations. The first loads the entire script:
Script Foo.scpt
set theBar to "path:to:Bar.scpt" as alias
run script (theBar)
Script Bar.scpt
display dialog "Bar"
--Result: A window that displays "Bar"
The second allows you to load a script and call specific methods within that script:
Foo.scpt
property OopLib : load script POSIX file "/Users/philipr/Desktop/OopLib.app"
tell OopLib
set theResult to Oop(1)
display dialog theResult
end tell
--> result: Window displaying "Eek: 1"
OopLib.scpt
on Oop(Eek)
display dialog Eek
return "Eek: " & Eek
end Oop
Use something like this to load the script:
set scriptLibraryPath to (path to scripts folder from user domain as text) & "myScript.scpt"
set scriptLibrary to load script (scriptLibraryPath as alias)
Then, to access a subroutine in that script, do this:
set myValue to someMethod() of scriptLibrary
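For completeness, myScript.scpt would then contain a handler such as this (someMethod is just a placeholder name):
on someMethod()
return "value from the library script"
end someMethod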
To add to what other posters have said, load script is the only built-in option; it's very primitive, but may be sufficient if your needs are modest.
Late Night Software's Script Debugger editor provides an #include-style library mechanism that can merge multiple AppleScript files when compiling a script. The downside of Script Debugger is that it's a couple hundred bucks to buy, though many regular AppleScript users will tell you it's well worth the investment.
There are a couple of third-party module loaders, Loader and ModuleLoader, that implement more sophisticated import mechanisms on top of the basic load script command, and are worth looking into if your requirements are more complex. I've not used ModuleLoader, but Loader (which I wrote) can import modules at compile- or run-time from various standard and user-specified locations, and will automatically resolve complex (even circular) dependencies between modules.
The downsides of Loader and ModuleLoader are that they rely on scripting additions to do some of the heavy lifting, which might be an issue when distributing scripts (in Loader's case, the osax is only needed to compile scripts, not to run them), plus you need to add some boilerplate code to your script to perform the actual import.
I have a lot of old Perl code that gets called frequently. I have been writing a new module, and all of a sudden I'm getting a lot of warnings in my Apache error_log; they appear for every module currently being used. For example:
"my" variable $variable masks earlier declaration in same statement at
/path/to/module.pm line 40 (#1)
Useless use of hash element in void context at
/path/to/another/module.pm line 212 (#2)
The codebase is laid out as one giant script that includes the modules and dispatches requests to whichever module is needed to build a given page for the website; the main script then handles static elements like menus.
My current project is separate from this main script and doesn't use it. However, whenever I call my code using Ajax, there are some other Ajax calls that go through the main script, and the warnings only seem to appear from those requests, and only when I'm calling my project.
I have grepped every module and none of them have use warnings (or -w) in them. I have also tried no warnings 'all' in the main script and in my own project, but it doesn't do anything.
At this point I'm out of ideas on what to do next, so all help is appreciated. I'd just like to suppress the warnings; the codebase is quite old and poorly written, so going through and correcting each issue that causes the warnings in the first place isn't doable.
The Apache server is running mod_perl as well, in case that makes a difference. I have a feeling it might be something to do with CGI, but I can't seem to find any evidence.
I take it that the code gets called by running certain top-level Perl script(s).
Then use the __WARN__ hook in those script(s) to stop the printing of warnings:
BEGIN { $SIG{__WARN__} = sub {} };
Place this BEGIN block before the use statements so that it affects the modules as well.
An empty subroutine is the way to mute warnings since __WARN__ doesn't support 'IGNORE'.
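A minimal sketch of the placement (the module names here are made up):
#!/usr/bin/perl
BEGIN { $SIG{__WARN__} = sub {} }  # install the hook before anything else is compiled
use Legacy::Module;                # warnings raised while loading or running
use Another::Legacy::Module;       # these modules are now silently discarded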
See warn and %SIG in perlvar.
See this post and this post for comments and some examples.
To investigate further and track down the warnings, you can use Carp:
BEGIN {
$SIG{__WARN__} = \&Carp::cluck; # or Carp::confess; to also die
}
which will make it print full stack traces. This can be fine-tuned as you please since we can write our own sub to be called. Or use Carp::Always.
See this post for some more drastic measures (like overriding CORE::GLOBAL::warn).
Once you find a more precise level at which to suppress warnings then local $SIG{__WARN__} is the way to go, if possible. This is used in a post linked above, and here is another example. It is of course far better to suppress warnings only where needed instead of everywhere.
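For instance, a minimal sketch of the scoped form (noisy_call just stands in for whatever routine actually warns):
use strict;
use warnings;
sub noisy_call { warn "something noisy\n" }  # stand-in for the real code
{
local $SIG{__WARN__} = sub {};  # warnings are muted only inside this block
noisy_call();                   # prints nothing
}
noisy_call();                   # warns as usual again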
More detail
Getting stack traces in Perl?
How can I get a call stack listing in Perl?
Note that longmess is unfortunately no longer so standard and well supported.
How can I include the procedures from one NetLogo file into another? Basically, I want to separate the code of a genetic algorithm from my (quite complicated) fitness function, but, obviously, I want the fitness reporter, which will reside in "fitness.nlogo", to be available in the genetic algorithm code, probably "genetic.nlogo".
If it can be done, how are the procedures imported, and the code executed? Is it like Python, where importing a module pretty much executes everything in the module, or like C/C++, where the file is blindly "joined"?
This may be a stupid question, but I couldn't find anything on Google. The NetLogo documentation says something about __includes, an experimental keyword that may do the trick, but there's not much explained there, and no example either.
Any hints? Should I go with __includes? How does it work?
To include a file you use
__includes["libfile.nls"]
After adding this and pressing the “Check” button, a new button will appear next to the Procedures drop-down menu. There you can create and manage multiple source files.
The libfile.nls is just a text file that contains NetLogo code. It is not a NetLogo model; model files always end in .nlogo, because a NetLogo model contains a lot of other information besides the NetLogo code.
Including a file is equivalent to just inserting its entire contents at that point. To make it work like a reusable library file, write procedures that take agentsets and parameters as inputs, so that they are independent of global definitions and interface settings.
The feature is documented in the NetLogo User Manual at http://ccl.northwestern.edu/netlogo/docs/programming.html#includes.
You can create a file libfile.nls and in the same folder create your main model model.nlogo.
After that, go to your model.nlogo and write:
__includes["libfile.nls"]
This file contains your reporters and procedures that you can call in your model.
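For example, a minimal sketch (the reporter name and its arguments are made up for illustration):
In libfile.nls:
to-report fitness-of [candidates weight]  ;; takes an agentset and a number as inputs,
report weight * count candidates          ;; so it ignores globals and interface settings
end
And in model.nlogo:
__includes ["libfile.nls"]
to test-lib
create-turtles 10
show fitness-of turtles 2.5  ;; prints 25
end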
I often use Sweave to produce LaTeX documents where certain chunks are produced dynamically by executing R code. This works well - but is it also possible to have code chunks that are executed in different ways, e.g. by executing the code in the shell, or by running Perl, and so on? It would be helpful to be able to mix things up, so I could do things like run some shell commands to fetch some data, run some perl commands to pre-process it, and then run R commands to analyze it.
Of course I could use all R chunks and use system() as a poor-man's substitute, but that doesn't make for very pleasant reading in the document.
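For reference, the poor-man's version looks roughly like this in a .Rnw file (the commands and file names are just placeholders):
<<fetch-and-clean, echo=TRUE, results=hide>>=
system("curl -o raw.csv http://example.com/raw.csv")  # fetch data in the shell
system("perl preprocess.pl raw.csv > clean.csv")      # pre-process it with Perl
dat <- read.csv("clean.csv")                          # analyze it in R
@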
The new new thing for multi-language, multi-format documents may be dexy.it, which, for example, the folks at opengamma.org use as their backend.
Ana, who is behind dexy, also gives a lot of talks about it, so have a look at the dexy blog as well.
It's not directly related to Sweave, but org-babel, which is part of Emacs org-mode, allows you to mix code chunks of different languages in one file, pass data from one chunk to another, execute them, and generate LaTeX or HTML export from the output.
You can find more information about org-mode here:
http://www.orgmode.org/
And to see how org-babel works:
http://orgmode.org/worg/org-contrib/babel/
There is certainly no easy way to do this other than through either foreign language interfaces from R (maybe through inline if it's supported), or system(). For what it's worth, I would just use system(); that should be easy enough.
You can see this previous question about having a Sweave equivalent for Python, where one of the respondents actually creates a separate interface. This can give you a sense of what it would take to embed other languages that may not already be supported. At a minimum, you would have to do major hacking on the Sweave driver.
Do you know Emacs' org-mode and, more specifically, Babel? If you already know Emacs or are willing to switch to it, then org-mode and Babel are the answer to your question(s).
For instance, I am currently working on a document which contains some shell-scripts, does computations with R and creates flow charts with dot (graphviz). Org-mode can export a variety of formats, e.g. LaTeX (that's what I use).
There is the StatWeave project, which uses Java rather than R to do the weaving, but will run multiple programs instead of just R. I don't know how hard it would be to get it to run Perl or other programs like that, but the homepage indicates that it already works with R, SAS, Stata, and others:
http://www.cs.uiowa.edu/~rlenth/StatWeave/
I need to write some scripts for WinXP to support some of the analysts here at Big Financial Corp. I need to decide which type of Windows scripting best fits my needs.
My needs seem pretty simple (to me anyway)
run on WinXP Pro SP2 (version 2002)
not require my users to install anything (so PowerShell is out. Likewise Perl, Python, and other common suggestions for these types of questions on Stack Overflow)
written in a non-compiled language (so users have a chance to modify them in the future)
reasonably complete language features (especially date/time manipulation functions. I would like to also have modern concepts like subroutines, recursion, etc)
ability to launch and control other programs (at the command line)
From my hurried review of my options, it looks like my choices are
VBScript
WScript
JScript
I don't have time to learn or do an in-depth review of these (or whatever else a standard install of WinXP has available). I need to pick one and hack something together as quickly as possible.
(Current crisis is the need to run a given application, passing several date parameters).
Once the current crisis is over, there will be more requests like this.
Edit
My current skill set includes Perl, JavaScript, and Java so I'm most comfortable using something similar to these
Edit
OK. I'll try writing a WSH file in JScript. I'll let you know how it goes (and figure out accepting an answer) once things settle down around here a bit.
Edit
It all worked out in the end. Thanks for the quick responses folks. Here's what I gave my user:
<job id="main">
<script language="JScript">
// ----- Do not change anything above this line ----- //
var template = "c:\\path\\to\\program -##PARAM## --start ##date1## --end ##date2## --output F:\\path\\to\\whereever\\ouput_file_##date1##.mdb";
// Handle dates
// first, figure out what they should be
dt = new Date();
var date1 = stringFromDate(dt, 1);
var date2 = stringFromDate(dt, 2);
// then insert them into the template
template = template.replace(new RegExp("##date1##", "g"), date1);
template = template.replace(new RegExp("##date2##", "g"), date2);
// This application needs to run twice, the only difference is a single parameter
var params = ["r", "i"]; // here are the params.
// set up a shell object to run the command for us
var shellObj = new ActiveXObject("WScript.Shell");
// now run the program once for each of the above parameters
for ( var index in params )
{
var runString = template; // set up the string we'll pass to the Windows console
runString = runString.replace(new RegExp("##PARAM##", "g"), params[index]); // replace the parameter
WScript.Echo(runString);
var execObj = shellObj.Exec( runString );
while( execObj.Status == 0 )
{
WScript.Sleep(1000); //time in milliseconds
}
WScript.Echo("Finished with status: " + execObj.Status + "\n");
}
// ----- supporting functions ----- //
// Given a date, return a string of that date in the format yyyy-m-d
// If given an offset, it first adjusts the date by that number of days
function stringFromDate(dateObj, offsetDays){
if (typeof(offsetDays) == "undefined"){
offsetDays = 0;
}
dateObj.setDate( dateObj.getDate() + offsetDays );
var s = dateObj.getFullYear() + "-"; //Year (getFullYear avoids getYear's two-digit quirk)
s += (dateObj.getMonth() + 1) + "-"; //Month (zero-based)
s += dateObj.getDate(); //Day
return(s);
}
// ----- Do not change anything below this line ----- //
</script>
</job>
Clearly it could be better... but it got the job done and is easy enough for my user to understand and extend himself.
These aren't really three interchangeable things with different syntax: WScript/CScript is the engine, while VBScript and JScript are the languages it runs.
Personal opinion only follows: my personal recommendation is JScript, because it reminds me more of a real programming language and makes me want to punch myself in the face less often than VBScript. And given your familiarity with JavaScript, your best bet is JScript.
Going into a bit more detail about the difference between WScript and CScript, as others have: these are the execution hosts for your scripts under the Windows Script Host. They are essentially the same thing, except that WScript is more GUI-oriented and CScript is more console-oriented. If you start a script with CScript, you will see a console window but still have access to GUI functionality; if you start it with WScript, there is no console window, and many of the default output methods display as windowed objects rather than as lines in a console.
If you like JavaScript, you'll probably be ok with JScript. It's a decent language, and certainly more suitable for complex scripts than VBScript.
However, Microsoft[1] hates JavaScript, so you'll encounter some APIs that are trivial to use with VBScript but painful to access using JScript. Consider yourself warned...
As snicker notes, WScript is the engine that drives both.
[1] Anthropomorphization used to note general lack-luster support; not to be interpreted as evidence of any official policy.
Although JScript is a much less horrible language than VBScript, the problem is that VBScript has a more complete library of helpful functions built into it for things like date and number formatting. So it's not actually as easy a choice as it first appears, unless you are able to write and install your own library of helper objects to use with JScript.
Some details here.
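For example, a rough sketch of what VBScript gives you out of the box (JScript has no direct equivalents, which is why the script above hand-rolls its date strings):
WScript.Echo FormatDateTime(Now, vbShortDate)  ' e.g. 1/15/2010, locale-dependent
WScript.Echo FormatNumber(1234.5678, 2)        ' 1,234.57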
Don't forget CScript. And be careful here, because the Windows Script Host is often disabled by group policy at large companies. If that's the case, the only option that fits all your criteria is (shudder) batch.
If none of those work out for you, your best option is probably a compiled program where you distribute the source with the program.
Use JScript. A key difference between using JScript with the WScript/CScript hosts and writing JavaScript in the browser is that you don't have the browser security restrictions. You also have access to ActiveX/COM objects for manipulating applications, the registry, etc. In fact, you'll probably spend a lot more time reading up on those custom objects and interfaces than worrying about the language features. Of course, you get all the benefits of JavaScript dates, regexes, etc.
A sample JScript app from MSDN might help to get you started.
Unfortunately, most of Microsoft's sample scripts tend to be VBScript, but the syntax is pretty easy to follow, especially if you're just trying to pick out COM interface details.
To expand on the others' answers a bit: WScript and CScript are the programs used to run scripts written in VBScript (Visual Basic-like syntax) or JScript (JavaScript-like syntax). CScript runs scripts from a console window, so the Echo command writes to the console, whereas WScript runs them without a window and the Echo command writes to a popup message box.
You can write WSH (Windows Scripting Host) and WSC (Windows Scripting Component) scripts that use both VBScript and JScript by combining them in an XML-based wrapper, if you need to merge pre-existing code in the two languages.
You can also write HTA scripts, where HTA stands for "HyperText Application". These are scripts inside an HTML file with the HTA extension that runs in Internet Explorer, which allows you to provide an interface using HTML controls while still having complete access to your system, because they run locally.
All of these are based on the Windows Scripting Host and Active Scripting technologies, which have been included with all Windows computers since Windows 98. Along with fairly powerful base languages, they also give you access to WMI for extensive system and network information and management, and COM for automating Word, Excel, etc. You can also use ADO and ADOX to create and work with Access MDB files even if Access is not installed.
My choice would be WSH using JScript. You could use VBScript, but why, when JScript is available?
Here is a reference for Windows Script Host.
Do you know if there's any tool for compiling bash scripts?
It doesn't matter if that tool is just a translator (for example, something that converts a bash script to a C program), as long as the translated result can be compiled.
I'm looking for something like shc (it's just an example -- I know that shc doesn't work as a compiler). Are there any other similar tools?
A Google search brings up CCsh, but it will set you back $50 per machine for a license.
The documentation says that CCsh compiles Bourne Shell (not bash ...) scripts to C code and that it understands how to replicate the functionality of 50-odd standard commands, avoiding the need to fork them.
But CCsh is not open source, so if it doesn't do what you need (or expect) you won't be able to look at the source code to figure out why.
I don't think you're going to find anything, because you can't really "compile" a shell script. You could write a simple script that converts all lines to calls to system(3), then "compile" that as a C program, but this wouldn't have a major performance boost over anything you're currently using, and might not handle variables correctly. Don't do this.
The problem with "compiling" a shell script is that shell scripts just call external programs.
In theory you could actually get a good performance boost.
Think of all the
if [ x"$MYVAR" == x"TheResult" ]; then echo "TheResult Happened" fi
(note invocation of test, then echo, as well as the interpreting needed to be done.)
which could be replaced by
if ( !strcmp(myvar, "TheResult") ) printf("TheResult Happened");
In C: no process launching, no having to do path searching. Lots of goodness.