Do all Ruby interpreters follow the same Ruby syntax defined in www.ruby-lang.org?
MRI
YARV
Rubinius
JRuby
IronRuby
Since it is the interpreter that defines the Ruby language, does that mean one interpreter could add a feature or syntactic sugar that the other interpreters haven't?
If that is the case, do all interpreters have their own API documentation?
Because I'm using: http://ruby-doc.org/ruby-1.9/index.html.
Which interpreters are implementing that one?
Could someone shed some light on this topic?
Thanks!
Do all Ruby interpreters follow the same Ruby syntax defined in www.ruby-lang.org?
Yes, they all use the same syntax. In fact, they actually all use the same parser, or at least a parser that was automatically generated from the same source file.
Because I'm using: http://ruby-doc.org/ruby-1.9/index.html.
Which interpreters are implementing that one?
At the moment, the only production-ready Ruby execution engine that implements Ruby 1.9 fully is YARV.
JRuby itself is production-ready, and it implements both Ruby 1.8.7 and Ruby 1.9.2, but the Ruby 1.9.2 implementation is not yet complete. IronRuby and Rubinius are also working on implementations of Ruby 1.9.2. MacRuby has a fairly complete Ruby 1.9 implementation, but it is still far from a 1.0 release. MRI doesn't implement Ruby 1.9 and probably never will.
But I don't understand why you are so concerned about the syntax. Syntax differences are trivial to spot: if there were a difference in the syntax, the engine would simply refuse to parse your file and you would know immediately that there is something wrong. Differences in semantics on the other hand are much more dangerous.
Which bit of "syntactic sugar" are you referring to?
Keep in mind that Ruby has a very small set of keywords. A lot of stuff that seems to be a keyword at first is actually implemented by Kernel (e.g. require, puts, and raise).
http://apidock.com/ruby/Kernel
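For instance, you can ask Ruby where those names come from; this is a quick irb check, and the exact owner shown can vary slightly by version (RubyGems, for example, wraps require):

method(:puts).owner                    # => Kernel
method(:raise).owner                   # => Kernel
Kernel.private_method_defined?(:puts)  # => true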
Related
Up till now, I've been writing all my scripts in POSIX shell so that I can run them on all the machines I use (macOS, various Linuxes, FreeBSD). I like to have a unified experience where I can pull down my dotfiles from git, and have all my settings and scripts ready to go.
However, POSIX shell is a really irritating language, and I've been considering switching to Ruby for my scripting needs. Unfortunately before I even got started, I realized that macOS ships with Ruby 2.3.7, and one of the machines I often need to use only has Ruby 1.8.7. It's a school machine for which I do not have sudo access, so I can't install a later version of Ruby.
TL;DR
Is it possible to write Ruby scripts that will run on both 1.8.7 and 2.3.7 (and hopefully everything in between)? Is there a set of guidelines I can follow that will help me avoid incompatible language features?
Yes, it is possible, provided you restrict yourself to the intersection of the language and library features of Ruby 1.8.7, Ruby 2.3.7, and everything in between.
The last big break in backwards compatibility was the jump from Ruby 1.8 to Ruby 1.9. String handling was completely changed, so you need to be very careful with anything to do with text processing.
Off the top of my head:
In Ruby 1.8, strings are considered to be a sequence of bytes. In Ruby 1.9, strings are a factory for different iterators, iterating in terms of code points, characters, or bytes.
Therefore, in Ruby 1.9, strings are no longer Enumerable.
Indexing into a string will return an Integer in Ruby 1.8 and a single-character String in Ruby 1.9.
Character literals (e.g. ?a) will evaluate to an Integer in Ruby 1.8 and a single-character String in Ruby 1.9 (see the example after this list).
In Ruby 1.8, all strings are assumed to be in the same encoding (usually ASCII, but can be changed to a very limited set, namely UTF-8 and a couple of Asian encodings). Ruby 1.9 is fully encoding-aware, each string has its own separate encoding, each I/O stream has two encodings (an internal one and an external one) and each source file has a separate encoding.
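For illustration, here is roughly how the indexing and character-literal differences above show up (the results in the comments are version-specific):

s = "abc"
s[0]      # Ruby 1.8: 97 (the byte value)    Ruby 1.9+: "a"
?a        # Ruby 1.8: 97                     Ruby 1.9+: "a"
s[0, 1]   # => "a" on both, so slicing with an explicit length is the portable form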
Your best bet might be to restrict yourself to use only what is specified in the ISO Ruby Language Specification. The spec was specifically written so that all existing Ruby implementations at the time (MRI, YARV, IronRuby, JRuby, MacRuby, Rubinius) would automatically be compliant, which basically means that the spec only specifies the minimal required subset of the intersection of Ruby 1.8.6, Ruby 1.8.7, and Ruby 1.9.0.
Since there haven't been any backwards-incompatible changes since then, you can be assured that your code will work on all versions from Ruby 1.8.6 up to (but excluding) 3.0, and on all ISO-compliant Ruby implementations, which are as far as I know MRI, YARV, JRuby, IronRuby, Rubinius, MacRuby, MRuby, MagLev, and TruffleRuby.
I'm developing a "Rails-less" Ruby daemon for automation (although in theory it runs within a Rails directory). For general purposes and the principle of the matter, I would like to find the (most) "native"/common way to utilize a Ruby version of .present?/.blank?/.empty?/.nil? to identify if an array or a (hash) value exists and is not empty (i.e., [] or {}).
From what I've read (e.g., Stack Overflow) and tested, all of these functions appear to be Rails-specific methods, part of ActiveSupport(?).
Coming from other web/interpreted languages (PHP, Python, JS, etc.), this seems like general-purpose logic that most languages with arrays have built in one way or another (e.g., PHP's isset( ... ) or JavaScript's .length).
I understand there are regex workarounds for .blank?, but .present? seems like it would require exception handling to identify whether something is "present". I'm having a hard time believing it doesn't exist, but there's little talk about Ruby without Rails' involvement.
Active Support is broken into small pieces so that you can load just what you need. For the .blank? and .present? methods it is enough to require:
require 'active_support/core_ext/object/blank.rb'
As the docs say.
Object#nil?, Array#empty?, and Hash#empty? are already defined, so you don't need to require anything to use those.
Make sure the active_support gem is installed on your system.
You can use ActiveSupport without including all of Rails in your app; that's actually quite common.
nil? and empty? are defined in the standard library.
E.g., String#empty? is simply testing if the length is 0.
To use Active Support, just install the gem or add it to your Gemfile, then:
require 'active_support'
The documentation also states you can cherry-pick the core extensions you want:
require 'active_support/core_ext/object/blank'
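If you want to avoid the dependency entirely, a minimal plain-Ruby stand-in is easy to sketch. These helper names are hypothetical and only cover nil, String, Array, and Hash, rather than everything Active Support's versions handle:

def blank?(value)
  return true if value.nil?
  return value.strip.empty? if value.is_a?(String)    # treat whitespace-only as blank
  return value.empty? if value.respond_to?(:empty?)
  false
end

def present?(value)
  !blank?(value)
end

blank?(nil)          # => true
blank?("   ")        # => true
blank?([])           # => true
present?({ a: 1 })   # => true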
According to the official Ruby About page, it's easier to extend Ruby with C than Perl. I'm not a (Perl) XS guy, but I find it dirt simple to write something quick and simple with Inline::C, so why is it easier in Ruby?
Writing C extensions in Ruby is easier than in Perl or Python, with a very elegant API for calling Ruby from C. This includes calls for embedding Ruby in software, for use as a scripting language. A SWIG interface is also available.
Any further explanation from those that do more C extensions would be useful.
(Full disclosure, I am a Perl programmer)
The Ruby C API certainly looks much nicer than Perl's. It looks like a regular C library with functions that correspond to Ruby code. Perl's API is a mess of macros within macros within macros and magic threading flags. Using the Perl API outside of the Perl core is certainly a secondary concern. Ruby definitely wins on not being bowel-clenchingly terrifying.
While Ruby has a better C API, Perl has the better tutorials on how to do anything with it. The generated Ruby documentation lacks any sort of cohesive tutorial or often any descriptive text at all. It's possible I'm looking in the wrong place but that's all that was offered. In contrast, the Perl API documentation is hand written prose with useful information about what each function does. In addition, there's over a dozen documents in the core docs about using Perl and C. I'd say Perl wins on docs.
FFI looks quite impressive. The closest thing Perl has to FFI is Inline::C, which is a wrapper around the mess of XS. Its primary use is to inline C code into your Perl program, but you can also use it to access C library functions.
Here's a trivial example similar to nash's getpid example.
use Inline C => Config => ENABLE => "AUTOWRAP";
use Inline C => q{ int getpid(); };
print getpid();
Now, I am cheating because technically getpid returns pid_t on my system, but that's just an integer. FFI seems to have an awful lot of special-cased code for getpid, so I suspect its ease of use will correspond directly to whether FFI has already taken care of it. Trivial examples are trivial. It would be interesting to see what happens when typical complications arise, such as functions that return pre-allocated memory and have odd types and throw around structs.
While FFI and Inline::C can be used to do the same thing, how they do it looks very, very different. Inline::C is actually compiling and caching C code. FFI is somehow not doing any compiling. I'm not sure if that's really for real, or if the compilation is done for you at install time for common libraries.
In addition, FFI smooths the portability problems across the various Ruby implementations and their different ways of calling native APIs. This is something Inline::C doesn't have to do, and quite frankly it's amazing if it really works. One benefit is the FFI interface is much smoother than Inline::C. With Inline::C, it's very clear that you're writing a wrapper around a C compiler.
With FFI it's very easy to extend Ruby with C. This is an example from GitHub:
require 'rubygems'
require 'ffi'
module Foo
extend FFI::Library
ffi_lib FFI::Library::LIBC
attach_function :getpid, [ ], :int
end
puts "My pid=#{Foo.getpid}"
You don't need a compiler installed on your system to be able to run FFI extensions. On Linux, you also do not need to install the development versions of libraries, just the runtime versions. Of course, the libraries you link against will need to have been compiled at some point, but odds are you won't have had to do it.
https://github.com/ffi/ffi/wiki/why-use-ffi
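For something a bit less trivial than getpid, and to touch on the struct-handling concern raised in the other answer, here is a hedged FFI sketch that wraps gettimeofday. The struct layout is assumed from the usual POSIX definition, so check your platform's headers:

require 'ffi'

class Timeval < FFI::Struct
  layout :tv_sec,  :long,
         :tv_usec, :long
end

module Sys
  extend FFI::Library
  ffi_lib FFI::Library::LIBC
  # int gettimeofday(struct timeval *tv, struct timezone *tz);
  attach_function :gettimeofday, [:pointer, :pointer], :int
end

tv = Timeval.new
Sys.gettimeofday(tv, nil)
puts "Seconds since epoch: #{tv[:tv_sec]}"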
Possible Duplicate:
What is the difference between Ruby 1.8 and Ruby 1.9
I have found some differences in interpretation of global and local variables.
Can anyone point me to a list of the major differences?
These are probably the most important changes:
Ruby 1.9 changed from being interpreted to being bytecode-compiled (using the YARV VM).
The String class has been redesigned entirely to make it encoding-aware.
Regular expressions are now implemented using the Oniguruma engine, rather than the home-made one used in Ruby 1.8, enabling new features like negative look-around.
The enumerator library from the stdlib has been added to core, and most Enumerable methods have been changed to return an Enumerator when invoked without a block.
Symbol#to_proc has been added.
There's a new syntax for lambdas, ->, which allows default arguments and lambdas taking blocks (illustrated below).
There's a more complete list of changes here.
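For illustration, here are two of the additions from the list above in action (the -> syntax only parses on 1.9+, so it is a syntax error under 1.8):

["foo", "bar"].map(&:upcase)     # => ["FOO", "BAR"]   (Symbol#to_proc)

greet = ->(name, greeting = "Hello") { "#{greeting}, #{name}!" }
greet.call("Ruby")               # => "Hello, Ruby!"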
One major point might be that they use a different VM (at least, the 'standard' distributions do; obviously there are a number of alternatives like MacRuby, IronRuby, etc.). You might have a look here for details on all the changes.
As per this, ParseTree will not work in Ruby 1.9. I do not have a computer science background but have been programming for a few years. Can someone explain in a little extra detail why ParseTree will not work in Ruby 1.9?
The Ruby 1.8 interpreter constructed and then walked an abstract syntax tree, whereas the Ruby 1.9 interpreter is bytecode-based. Therefore there is no AST available at runtime for ParseTree to work with.
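You can see the bytecode side of this yourself on 1.9+, where YARV exposes its compiled instruction sequences; this is the layer that replaced the runtime AST that ParseTree relied on:

# Compile a snippet and print its YARV instructions (Ruby 1.9+ only).
puts RubyVM::InstructionSequence.compile("1 + 2").disasm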
ngty has published a gem, called sourcify, that provides a lot of ParseTree's functionality in Ruby 1.9.
Since he couldn't use the AST, he wrote an sexp parser that works most of the time. Sometimes you have to give it some help, for example by telling it which Proc the code is attached to:
## The old ParseTree way
# proc.to_ruby
## The sourcify way - but may raise NoMatchingProcError or MultipleMatchingProcsPerLineError
# proc.to_source
## The sourcify way - giving :attached_to a symbol to help it find the correct Proc
proc.to_source :attached_to => :name_of_proc
Here's a longer explanation of how we replaced ParseTree when we migrated to Ruby 1.9.
ParseTree appears to be dead due to some internal changes in Ruby 1.9, as per this link:
http://blog.zenspider.com/2009/04/parsetree-eol.html
Though it may be that some people will come up with workarounds for parts of it.