I'm trying to use a code generator inside a buildr-based Java project. I would like to...
call the generator
compile the generated classes and package them
have Eclipse see the generated stuff (i.e. have the .classpath contain those sources)
I've spent lots of time googling for a complete example, but to no avail. The example here: https://cwiki.apache.org/confluence/display/BUILDR/How+to+generate+sources+before+compilation
gives me plenty of errors. This is what I've tried:
define 'generator' do
  compile.with ALL_COMMON_MODULES
end

define 'extras' do
  p 'calling generator..'
  Java.classpath << ALL_COMMON_MODULES
  Java.classpath << projects('generator')
  Java.org.foo.generator.Main.main(['../modules/daos', 'target/generated-sources'])
  sources = FileList[_("src/main/jeannie/*.*")]
  generate = file(_("target/generated-sources") => sources).to_s do |dir|
    puts 'generating...'
    mkdir_p dir.to_s # ensure directory is created
  end
  compile.from generate
end
gives me an error like this:
RuntimeError: Circular dependency detected: TOP ...
So I'm obviously doing something very wrong. I'd be glad to see a working example or a project that uses code generation.
I finally got it working with help from the buildr mailing list. For anyone interested: the example mentioned above contains a problem. This:
compile.from generate
should be:
compile.from generate.to_s
Now it works beautifully! Buildr also automatically extends the .classpath for Eclipse (and IDEA, in case you use that) if compile.from points to another location.
You cause the circular dependency by calling the parent project in your 'extras' project, at this line:
Java.classpath << projects('generator')
Maybe you should put everything at the same level, which is also what the example shows.
In that scenario your generate reference wouldn't be available either, as it lives in the 'extras' scope.
Something like this (untested):
define "generator" do
  sources = FileList[_("src/main/jeannie/*.*")]
  generate = file(_("target/generated-sources") => sources) do |dir|
    puts 'generating...'
    mkdir_p dir.to_s # ensure directory is created
    Java.classpath << ALL_COMMON_MODULES
    Java.org.foo.generator.Main.main(['../modules/daos', 'target/generated-sources'])
  end
  compile.with ALL
  compile.from generate.to_s
end
I know there is a Move module (smart contract) on chain with a function that looks like this:
public entry fun do_nothing() {}
I know it is deployed at 6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7::other::do_nothing, you can see the module in the explorer here.
I have a Move module of my own that looks like this.
Move.toml:
[package]
name = 'mine'
version = '1.0.0'
[dependencies.AptosFramework]
git = 'https://github.com/aptos-labs/aptos-core.git'
rev = 'main'
subdir = 'aptos-move/framework/aptos-framework'
[addresses]
my_addr = "81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"
other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7"
sources/mine.move:
module my_addr::mine {
use other_addr::other::do_nothing;
public entry fun do_stuff() {
do_nothing();
}
}
As you can see, I'm telling the compiler where the other module is by setting other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7". However, when I try to compile my Move module, it fails, saying "unbound module", meaning it doesn't know what the "other" module is.
$ aptos move compile --named-addresses my_addr="`yq .profiles.default.account < .aptos/config.yaml`"
Compiling, may take a little while to download git dependencies...
INCLUDING DEPENDENCY AptosFramework
INCLUDING DEPENDENCY AptosStdlib
INCLUDING DEPENDENCY MoveStdlib
BUILDING mine
error[E03002]: unbound module
┌─ /Users/dport/github/move-examples/call_other_module/mine/sources/mine.move:2:9
│
2 │ use other_addr::other::do_nothing;
│ ^^^^^^^^^^^^^^^^^ Invalid 'use'. Unbound module: '(other_addr=0x6286DFD5E2778EC069D5906CD774EFDBA93AB2BEC71550FA69363482FBD814E7)::other'
error[E03005]: unbound unscoped name
┌─ /Users/dport/github/move-examples/call_other_module/mine/sources/mine.move:5:9
│
5 │ do_nothing();
│ ^^^^^^^^^^ Unbound function 'do_nothing' in current scope
{
"Error": "Move compilation failed: Compilation error"
}
Why is compilation failing? Why can't the compiler figure it out for me based on the ABIs of the Move modules it finds at other_addr on chain?
The problem
In order to publish a Move module that calls a function in another Move module, you need its source code. This is true of all Move modules, not just your own. You'll notice in Move.toml there is already a dependency on AptosFramework. This is what allows you to call all the framework functions, e.g. those related to coins, tokens, signer, timestamps, etc.
So to make this work, you need to have access to the source.
Source: Git Dependency
If you have access to the source in another git repository, you can tell the compiler where to find the other module by adding this to your Move.toml:
[dependencies.other]
git = 'https://github.com/banool/move-examples.git'
rev = 'main'
subdir = 'call_other_module/other'
This is telling the compiler, "the source code for other can be found in the call_other_module/other/ directory at that git repo".
Source: Local
If you have the source code locally, you can do this instead:
[dependencies.other]
local = "../other"
Where the argument for local is the path to the source code.
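Putting this together with the Move.toml from the question, the whole file would look something like this (a sketch, assuming the deployed package is named other and its source sits in a sibling directory also named other):

```toml
[package]
name = 'mine'
version = '1.0.0'

[dependencies.AptosFramework]
git = 'https://github.com/aptos-labs/aptos-core.git'
rev = 'main'
subdir = 'aptos-move/framework/aptos-framework'

# Local dependency on the other package's source checkout.
[dependencies.other]
local = "../other"

[addresses]
my_addr = "81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e"
other_addr = "6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7"
```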
Source: I don't have it?
If you don't have the source, you can try to download it. By default, when someone publishes a Move module, they include the source code alongside it.
First try to download the code:
cd /tmp
aptos move download --account 6286dfd5e2778ec069d5906cd774efdba93ab2bec71550fa69363482fbd814e7 --package other
If the source code was indeed deployed on chain, you should see this:
Saved package with 1 module(s) to `/tmp/other`
{
"Result": "Download succeeded"
}
Inside /tmp/other you'll find the full source, including Move.toml and sources/.
From here, you can just follow the steps for Source: Local above.
Note: The value for --package should match the name field in Move.toml of the deployed code. More to come on how to determine this based on on-chain data.
Source: The download failed?
If you ran aptos move download and saw this:
module without code: other
Saved package with 1 module(s) to `/private/tmp/other_code/other`
{
"Result": "Download succeeded"
}
You'll find that sources/other.move is empty.
This means the author published the code with this CLI argument set:
--included-artifacts none
Meaning they purposely chose not to include the source on chain.
Unfortunately, at this point you're out of luck. It is a hard requirement of compilation that if you want to call a function in another Move module, you must have the source for that module. There is work in the pipeline that should enable decompilation of Move bytecode, but it isn't ready yet.
I hope this helps, happy coding!!
The code used in this answer can be found here: https://github.com/banool/move-examples/tree/main/call_other_module.
What is the recommended way to include all the files for your project?
I have seen many examples that use a similar structure: an ordered list of include statements in a single top-level file (the file that defines a Module, or the "main" file in an application).
This also seemed to be the conclusion in the following thread:
https://discourse.julialang.org/t/best-practices-for-structuring-larger-projects/2652?u=nhdaly
However, Lint.jl seems to be very unhappy with this sort of project structure, complaining about the persistent use of undeclared symbols.
Consider the following example:
# --- src/lib1.jl ---
struct MyStruct
x::Int32
end
# --- src/lib2.jl ---
foo() = MyStruct(3) # Lint.jl says: E321 MyStruct: use of undeclared symbol
# --- src/MyPackage.jl ---
module MyPackage
include("lib1.jl")
include("lib2.jl")
foo()
end
Of course, this code will work correctly, because MyStruct will be available before lib2.jl is compiled. But Lint.jl cannot know that. Even when using the Atom-integrated linter, it shows the same error.
So is it preferable to have lib2.jl include("lib1.jl") instead of all the includes in the top file? Or is it best to put them in both places, as you might for C++ headers? I haven't seen a definitive recommendation for this anywhere, and I'd love if someone could provide one! :)
EDIT: changed file names to reflect a more typical julia directory structure.
I usually do a:
PkgDev.generate("MyPackage", "MIT")
This generates the whole folder structure. I put my Julia (.jl) files in the src folder and use 'using' to import functions from other Julia files in the same folder.
I'm trying to make a top-level zip in my buildr buildfile, but the jar files don't seem to end up in the resulting zip.
The project structure has sub-projects:
desc 'Main project'
define 'hex' do
  #... omitting top-level config ...

  desc 'Hex Utilities'
  define 'util' do
    package :jar
  end

  #... omitting other sub-projects, they're all the same ...

  binaries_id = "#{id}-components-#{version}"
  package(:zip, :file => _("target/#{binaries_id}.zip")).path(binaries_id).tap do |path|
    path.include 'COPYING', 'COPYING.LESSER', 'CHANGELOG', 'README.markdown'
    %w{anno binary interpreter util viewer}.each do |proj|
      path.include project(proj).packages
    end
  end
end
I have also tried:
path.include project(proj).package(:jar)
explicitly selecting only the jar, though it's the only package anyway, and also:
path.include project(proj).packages.map{|p| "#{p}"}
which maps them to strings, since I noticed all the other path.include lines take strings and suspected it might not work with arrays of package objects. But this doesn't work either. My last thought was to jam in the strings directly, like the rest of the includes:
path.include "#{proj}/target/hex-#{proj}-#{version}.jar"
But even this doesn't work, so I have no idea how to proceed.
I cannot seem to reproduce the behaviour. With the most recent release of Buildr I created a buildfile that looks like:
define 'foo' do
  project.version = '1.0'

  define 'bar' do
    package :jar
  end

  package(:zip, :file => _("somepath.zip")).path('somepath').tap do |path|
    path.include project('bar').packages
  end
end
Running "buildr clean package" and then "unzip -t somepath.zip" produces:
Archive: somepath.zip
testing: somepath/ OK
testing: somepath/foo-bar-1.0.jar OK
No errors detected in compressed data of somepath.zip.
Which is what I would expect.
I am having trouble importing all of the .rb files I need from a given directory. I know this question has been asked a lot, but none of the earlier posts seem to solve my problem.
Here is my directory structure:
- Docs
- Lexer
- Parser
--> Parser.rb
- SyntaxTree
--> I want all the .rb files from here
--> Sets.rb (Module I want to import)
--> EMPTY_SET (constant I want to reference)
- Test
<Main Program>
I am currently working in Parser.rb and I need to get all of the .rb files from the SyntaxTree directory so I can reference them inside of Parser.rb. I also want to import the Sets module that is contained in Sets.rb.
Here is my current way of trying to import all of the ruby files that I saw in most of the previous posts:
Dir['../SyntaxTree/*.rb'].each {|file| require file}
I also tried:
Dir['../SyntaxTree/*.rb'].each {|file| require_relative file}
With either of these methods I still get errors when trying to include the Sets module:
class Parser
include Sets
.
.
end
`<class:Parser>': uninitialized constant Parser::Sets (NameError)
If I require Sets.rb directly, the error goes away.
Any ideas would be very helpful.
The best approach is to set up proper autoload dependencies within your modules so that the required classes are loaded on demand. If you simply load all of the files using require, the order in which they must be loaded isn't obvious from the filenames, and inter-dependencies are not resolved.
This is why having a top-level namespace for your application can help. Example:
module MyApp
  autoload(:Parser, 'my_app/parser')
  autoload(:Sets, 'my_app/sets')
end
Now those classes should load automatically on-demand.
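As a self-contained sketch of how this behaves (MyApp, Sets, and the file layout here are hypothetical, created on the fly just for the demo):

```ruby
require "tmpdir"
require "fileutils"

# Create a throwaway library: <dir>/my_app/sets.rb defines MyApp::Sets.
dir = Dir.mktmpdir
FileUtils.mkdir_p(File.join(dir, "my_app"))
File.write(File.join(dir, "my_app", "sets.rb"), <<~RUBY)
  module MyApp
    module Sets
      EMPTY_SET = [].freeze
    end
  end
RUBY
$LOAD_PATH.unshift(dir)

module MyApp
  # Nothing is read from disk yet; the file loads on first use of Sets.
  autoload(:Sets, "my_app/sets")
end

class Parser
  include MyApp::Sets # triggers the autoload, then mixes in the constants
end

puts MyApp::Sets::EMPTY_SET.inspect # prints "[]"
```

Because the file is only read when the constant is first referenced, the include order in your top-level file stops mattering.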
The profiles file is a YAML file called profiles.yaml that you place
in the same directory as the Buildfile
I was wondering whether it is possible to share the same profiles file across more than one project, i.e. to specify a different location for the profiles.yaml file instead of defaulting to the same directory as the buildfile.
I'm afraid I don't have a satisfactory answer.
You could use git-submodules, braid, svn externals (or a plain old symbolic link) to do the trick.
Looking at the code, we don't permit using a different path for the profiles file at this point.
If you would like this behavior to change, I would suggest you enter an enhancement request for Buildr and consider providing a patch.
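For instance, the plain-symlink variant could look like this (all paths here are made up for the demo):

```shell
# Shared profiles live in one place; each project symlinks the file
# next to its own buildfile (paths are hypothetical).
mkdir -p /tmp/profiles-demo/shared /tmp/profiles-demo/projA
printf 'development:\n  propname: dev-value\n' > /tmp/profiles-demo/shared/profiles.yaml
ln -sf ../shared/profiles.yaml /tmp/profiles-demo/projA/profiles.yaml
# Buildr running in projA now finds profiles.yaml as usual:
cat /tmp/profiles-demo/projA/profiles.yaml
```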
The solution I ended up with was to load my own profiles.yaml file:
require 'yaml'

def profile
  profiles = YAML.load_file('../../profiles.yaml')
  profiles[ENV['BUILDR_ENV']] ||= {}
end

puts "env:" << profile['propname']
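A variation on the same idea lets an environment variable point at the shared file, so sibling projects don't hard-code ../../. Note that PROFILES_PATH is my own invention, not a Buildr setting; the demo file is created on the fly:

```ruby
require "yaml"
require "tmpdir"

# Look up the profile hash for a build environment, reading profiles.yaml
# from PROFILES_PATH (hypothetical) and falling back to the current directory.
def profile(env = ENV["BUILDR_ENV"] || "development")
  path = ENV["PROFILES_PATH"] || "profiles.yaml"
  profiles = YAML.load_file(path) || {}
  profiles[env] || {}
end

# Demo with a throwaway shared profiles file:
Dir.mktmpdir do |dir|
  File.write(File.join(dir, "profiles.yaml"), <<~YAML)
    development:
      propname: dev-value
  YAML
  ENV["PROFILES_PATH"] = File.join(dir, "profiles.yaml")
  puts "env: " + profile("development")["propname"] # prints "env: dev-value"
end
```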