How to install a directory recursively with waf

I currently use the following valadoc build task to generate API documentation for my Vala application:
doc = bld.new_task_gen(
    features = 'valadoc',
    output_dir = '../doc/html',
    package_name = bld.env['PACKAGE_NAME'],
    package_version = bld.env['VERSION'],
    packages = 'gtk+-3.0 gee-1.0 libxml-2.0 x11 gdk-x11-3.0 libpeas-gtk-1.0 libpeas-1.0 config xtst gdk-3.0',
    vapi_dirs = '../vapi',
    force = True)
path = bld.path.find_dir('../src')
doc.files = path.ant_glob(incl='**/*.vala')
This task creates an html directory in the output directory, including several subdirectories with HTML and image files.
What I am now trying to do is to install these files to /usr/share/doc/projectname/html/. To do so I added the following to the wscript_build file (following the documentation I found here):
output_dir = doc.bld.path.find_or_declare('../doc/html')
doc.outputs = output_dir.ant_glob(incl='**/*')
doc.bld.install_files('${PREFIX}/share/doc/projectname/html', doc.outputs)
However, this leads to the error "Missing node signature". Does anyone know how to get around this error? Or is there a simple way to install a directory recursively with waf?
You can find a full-fledged sample here.

I had a similar issue with generated files and had to update the signature for the corresponding Node objects. Try creating a task:
from waflib import Build, Utils

def signature_task(task):
    for x in task.generator.bld.path.find_dir('../doc/html').ant_glob('**/*', remove=False):
        x.sig = Utils.h_file(x.abspath())
At the top of your build rule, try adding:
# Support running task groups serially
bld.post_mode = Build.POST_LAZY
Then at the end of your build, add:
# Previous tasks belong to a group
bld.add_group()
# This task runs last
bld(rule=signature_task, always=True, name="signature_task")

There is an easier way using relative_trick.
bld.install_files(destination,
                  bld.path.ant_glob('../doc/html/**'),
                  cwd=bld.path.find_dir('../doc/html'),
                  relative_trick=True)
This gets a list of files from the glob, chops off the prefix, and puts it into the destination.
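For example, plugging in the paths from the question (the destination shown is just illustrative), a file found at ../doc/html/sub/page.html would be installed to ${PREFIX}/share/doc/projectname/html/sub/page.html:
bld.install_files('${PREFIX}/share/doc/projectname/html',
                  bld.path.ant_glob('../doc/html/**'),
                  cwd=bld.path.find_dir('../doc/html'),
                  relative_trick=True)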

Add shell script output value in a Jenkins ArrayList variable

def path = ....
def files = []
sh "for file in $path/*.json; do files.add(file); done"
echo ${files}
Error I get in Jenkins: /jenkins/workspace/..... "syntax error near unexpected token 'file'"
Can someone help me with how I can add each file to files? I tried looking for answers but couldn't find anything useful that solved my scenario.
I want to add the file variable to the ArrayList variable files so that I can fire a curl command for each file in my Jenkins pipeline.
Also, is there some way I can test the script before deploying it to any environment?
If you want to get the JSON files in a directory as a list, you can use the following code. (Note that the sh step hands its body to a shell, so Groovy like files.add(file) can never run inside it; collect the files on the Groovy side instead.)
path = "/home/your/path"
dir(path) {
def files = findFiles(glob: '**/*.json')
println files.size()
println files[0].name
}
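To get from there to the original goal of firing a curl command per file, here is a minimal sketch along the same lines (the endpoint URL and curl flags are placeholders, not from the question):
dir(path) {
    def files = findFiles(glob: '**/*.json')
    // a plain for loop is the safest way to iterate in CPS-transformed pipeline code
    for (int i = 0; i < files.size(); i++) {
        sh "curl -X POST -H 'Content-Type: application/json' --data-binary @${files[i].path} https://example.com/upload"
    }
}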

Is there a way to change the working directory of fiddle?

I'm trying to load a C shared library within Ruby using Fiddle.
Here is a minimal example:
require 'fiddle'
require 'fiddle/import'

module Era
  extend Fiddle::Importer
  dlload './ServerApi.so'

  extern 'int era_init_lib()'
  extern 'void era_deinit_lib()'
  extern 'int era_process_request(const char* request, char** response)'
  extern 'void era_free(char* response)'
end

Era.era_init_lib
begin
  # ...
ensure
  Era.era_deinit_lib
end
The shared library loads without issues. However, when I call Era.era_init_lib it tries to load additional libraries (Network.so and Protobuf.so). I have these files located in the current working directory (in the same directory as ServerApi.so).
However, when I try to execute the code above I receive the following error:
! Failed to load library: /home/username/.rvm/rubies/ruby-2.6.5/bin/Network.so, error: /home/username/.rvm/rubies/ruby-2.6.5/bin/Network.so: cannot open shared object file: No such file or directory
If I place the file at the location the error describes everything works fine.
My guess is that the C working directory of fiddle is different from the Ruby working directory. I would like to keep the project files within the project and not in the Ruby installation directory.
How can I use Network.so from my project folder?
All the *.so files are provided by a third-party. I do not have the source and as a result cannot change these files. The function signatures are provided by the documentation.
Searching for Network.so in the strace output gives me these results:
readlink("/proc/self/exe", "/home/username/.rvm/rubies/ruby-2."..., 4096) = 44
openat(AT_FDCWD, "/home/username/.rvm/rubies/ruby-2.6.5/bin/Network.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
futex(0x7fcc16666d90, FUTEX_WAKE_PRIVATE, 2147483647) = 0
futex(0x7fcc16b44520, FUTEX_WAKE_PRIVATE, 2147483647) = 0
write(2, "! Failed to load library: ", 26! Failed to load library: ) = 26
write(2, "/home/username/.rvm/rubies/ruby-2."..., 50/home/username/.rvm/rubies/ruby-2.6.5/bin/Network.so) = 50
write(2, ", error: ", 9, error: ) = 9
write(2, "/home/username/.rvm/rubies/ruby-2."..., 109/home/username/.rvm/rubies/ruby-2.6.5/bin/Network.so: cannot open shared object file: No such file or directory) = 109
write(2, "\n", 1) = 1
I've also written a C program that does the same thing, and it works perfectly fine when the files are dropped into the same directory. So it might be the fault of the library, which I assume checks the location of the currently running program and then tries to load the libraries from that folder. This would explain the behavior when run as a Ruby script (since it runs as part of the Ruby program), whereas a C binary runs standalone.
For those who want to re-create the (Linux) issue: you can download the necessary files from here, which gives you the server-linux-x86_64.sh file.
Supported distros are Suse, Ubuntu, Debian, Red Hat and CentOS, but others may also work fine.
You can either run the installer, which should place the files in /opt/eset/RemoteAdministrator/Server, or, assuming most of you don't want to install the full application, you can run the following command:
sed '1,/^# Start of TAR\.GZ file #$/d' server-linux-x86_64.sh | sed '1d' > server-linux-x86_64.tar.gz
This removes all the installer instructions from the .sh file and leaves only the binary .tar.gz data, writing it to server-linux-x86_64.tar.gz.
Copy the files ServerApi.so, Protobuf.so and Network.so into a directory of your liking. Create a Ruby script (with the question code) in the same directory and run the script.
Because ServerApi.so checks /proc/self/exe for the location of all subsequent files to load, and it is very difficult to modify this target by normal means, it is easier to just modify ServerApi.so itself so that it uses something other than /proc/self/exe as the source.
If we run strings ServerApi.so, we can verify that the location to check is stored as a plain string inside ServerApi.so:
strings ServerApi.so | grep 'proc/self/exe'
B/proc/self/exe
So now all we need to do is modify this string to something else that works for us.
The easiest way to modify the string is to replace it with something that is exactly the same length as the original. This way we do not have to worry about changing the end-of-string zero padding or accidentally changing the total size of ServerApi.so.
Here we can see a suitable candidate could be /tmp/scriptexe:
/proc/self/exe
/tmp/scriptexe <- same length
So let's do that:
sed -e 's/proc\/self\/exe/tmp\/scriptexe/' ServerApi.so > ServerApi_Mod.so
Now we can verify the change:
strings ServerApi_Mod.so | grep scriptexe
B/tmp/scriptexe
Next we need to create /tmp/scriptexe to actually point to our Ruby script:
ln -s /the/full/path/to/our/ruby/script.rb /tmp/scriptexe
Then we modify our script:
dlload './ServerApi_Mod.so'
Now we can run it as normal:
ruby script.rb
And everything should work.
If we read the strace output we see that the library obtains the current executable location from /proc/self/exe, and then searches subsequent libraries from there.
/proc/self/exe is not easily modifiable, but by using a hard link to a Ruby executable in the current directory we can trick it into pointing to a new folder. The problem is that making the hard link requires root.
In any case, here is a self-contained solution (note that it will ask for root password the first time you run it, in order to create the hard link).
Put this at the top of your script:
# Obtain path to current executable
exe = File.readlink("/proc/self/exe")

# Check if we are running the hard-linked version
if !exe.match(/localruby/)
  if !File.exist?('localruby')
    # Create a hard link to the current Ruby exe using sudo
    system("sudo ln #{exe} localruby")
  end
  puts "Restarting..."
  # In order to prevent an infinite busy loop in case of some mishap
  sleep 1
  # Rerun self using the hard-linked Ruby executable.
  # This will make /proc/self/exe point to the hard link, which then
  # allows the ESET library to search for .so files in the current folder.
  exec('./localruby', File.expand_path(__FILE__))
end

require 'fiddle'
require 'fiddle/import'

# ...rest of your script goes here...
A simple solution without any extra Ruby code is to just create the hard link manually and then always run the script with ./localruby myscript.rb instead of the normal ruby myscript.rb.
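For reference, the manual setup is just the following sketch (it assumes the project directory and the Ruby binary are on the same filesystem, since hard links cannot cross filesystems; the ruby path is resolved rather than hard-coded):
# hard-link the interpreter that will run the script into the project dir
sudo ln "$(command -v ruby)" localruby
./localruby myscript.rb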

Terraform lambda source_code_hash update with same code

I have an AWS Lambda deployed successfully with Terraform:
resource "aws_lambda_function" "lambda" {
filename = "dist/subscriber-lambda.zip"
function_name = "test_get-code"
role = <my_role>
handler = "main.handler"
timeout = 14
reserved_concurrent_executions = 50
memory_size = 128
runtime = "python3.6"
tags = <my map of tags>
source_code_hash = "${base64sha256(file("../modules/lambda/lambda-code/main.py"))}"
kms_key_arn = <my_kms_arn>
vpc_config {
subnet_ids = <my_list_of_private_subnets>
security_group_ids = <my_list_of_security_groups>
}
environment {
variables = {
environment = "dev"
}
}
}
Now, when I run the terraform plan command it says my lambda resource needs to be updated because the source_code_hash has changed, but I didn't update the lambda Python codebase (which is versioned in a folder of the same repo):
~ module.app.module.lambda.aws_lambda_function.lambda
last_modified: "2018-10-05T07:10:35.323+0000" => <computed>
source_code_hash: "jd6U44lfe4124vR0VtyGiz45HFzDHCH7+yTBjvr400s=" => "JJIv/AQoPvpGIg01Ze/YRsteErqR0S6JsqKDNShz1w78"
I suppose it is because it compresses my Python sources each time and the archive changes. How can I avoid that if there are no changes in the Python code? Is my hypothesis coherent, given that I didn't change the Python codebase (I mean, why does the hash change otherwise)?
This is because you are hashing just main.py but uploading dist/subscriber-lambda.zip. Terraform compares the hash to the hash it calculates when the file is uploaded to lambda. Since the hashing is done on two different files, you end up with different hashes. Try running the hash on the exact same file that is being uploaded.
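For example, with the paths from the question, point the hash at the same zip that filename uses (a sketch; filebase64sha256 requires Terraform 0.11.12+, on older versions use base64sha256(file(...)) as in the question):
source_code_hash = filebase64sha256("dist/subscriber-lambda.zip")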
This works for me and also doesn't trigger an update on the Lambda function when the code hasn't changed:
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "../dist/go"
output_path = "../dist/lambda_package.zip"
}
resource "aws_lambda_function" "aggregator_func" {
description = "MyFunction"
function_name = "my-func-${local.env}"
filename = data.archive_file.lambda_zip.output_path
runtime = "go1.x"
handler = "main"
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
role = aws_iam_role.function_role.arn
timeout = 120
publish = true
tags = {
environment = local.env
}
}
I'm going to add my answer to contrast with the one @ODYN-Kon provided.
The source_code_hash field in resource "aws_lambda_function" is not compared to a hash of the zip you upload. Instead, the hash is merely checked against the Terraform saved state from the last time it ran. So, the next time you run Terraform, it computes the hash of the actual Python file to see if it has changed. If it has, it assumes that the zip has been changed and the Lambda function resource needs to be updated. The source_code_hash can have any value you want to give it, or it can be omitted entirely. You could set it to a constant of some arbitrary string, and then it would never change unless you edit your Terraform configuration.
Now, the problem there is that Terraform assumes you updated the zip file. Assuming you only have one directory or one file in the zip archive, you can use the Terraform data source archive_file to create the zip file. I have a case where I cannot use that because I need a directory and a file (JS world: source + node_modules/). But here is how you can use that:
data "archive_file" "lambdaCode" {
type = "zip"
source_file = "lambda_process_firewall_updates.js"
output_path = "${var.lambda_zip}"
}
Alternatively, you can archive an entire directory if you replace the source_file statement with source_dir = "node_modules".
Once you do this, you can reference the hash of the zip archive inside the resource "aws_lambda_function" "lambda" { ... } block as "${data.archive_file.lambdaCode.output_base64sha256}" for the field source_code_hash. Then, anytime the zip changes, the lambda function gets updated. And the archive_file data source knows that anytime the source_file changes it must regenerate the zip.
Now, I haven't drilled down to a root cause in your case, but hopefully this gives some help toward a better place. You can check the saved state of Terraform via tf state list, which lists the items of saved state. You can find the one that matches your lambda function block and then execute tf state show <state-name>. For example, for one I am working on:
tf state show aws_lambda_function.test-lambda-networking gives about 30 lines of output, including:
source_code_hash = 2fKX9v/duluQF0H6O9+iRnID2gokhfpXIXpxyeVBUM0=
You can compare the hash via command line commands. Example on MacOS: sha256sum my-lambda.zip, where sha256sum was installed by brew install coreutils.
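Note that sha256sum prints a hex digest, while Terraform stores the base64 encoding of the raw SHA-256, so the two won't match visually; to get a directly comparable value you can run something like:
openssl dgst -sha256 -binary my-lambda.zip | base64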
As mentioned, the use of archive_file doesn't work when you have multiple elements of the zip which are not isolated to a single directory. I think that probably happens a lot, so I wish the HashiCorp folks would extend archive_file to support multiple sources. I even went looking at the Go code, but that is a rainy-day project. One variation I use is to take the source_code_hash to be "${base64sha256(file("my-lambda.zip"))}". But that still requires me to run tf twice.
As others have said, your zip should be used in both your filename and your hash.
I want to mention that you can also get similar recreation issues if you use the wrong hash function in your lambda definitions. For example, filesha256("file.zip") will also recreate your lambdas every time. You have to use filebase64sha256("file.zip") (Terraform 0.11.12+) or base64sha256(file("file.zip")), as mentioned under source_code_hash here.

How can I add my own code to JAVA generated classes from proto file?

I'm using protobuf and I'm generating Java classes from the following proto file.
syntax = "proto3";
enum Greeting {
NONE = 0;
MR = 1;
MRS = 2;
MISS = 3;
}
message Hello {
Greeting greeting = 1;
string name = 2;
}
message Bye {
string name = 1;
}
option java_multiple_files = true;
Now I need to add some code to the generated files, and I found that it is possible using a custom plugin (https://developers.google.com/protocol-buffers/docs/reference/java-generated#plugins). I'm trying to write that plugin in Java, something like this:
public class Test {
    public static void main(String[] args) throws Exception {
        PluginProtos.CodeGeneratorResponse codeGeneratorResponse =
                PluginProtos.CodeGeneratorResponse.getDefaultInstance();
        /* Code to get generated files from java_out and use the insertion points */
        codeGeneratorResponse.writeTo(System.out);
    }
}
And then I run
protoc --java_out=./classes --plugin=protoc-gen-demo=my-plugin --demo_out=. example.proto
The problem is that in my Test.java main method I don't know how to get access to the files created by the --java_out option so that I can use their insertion points. Currently the CodeGeneratorResponse for the default instance is empty (no files).
Does anybody know how can I get the CodeGeneratorResponse from the --java_out so that I can add more code to the generated classes?
Thanks in advance.
I recently struggled with this as well and wasn't able to find a good answer. I finally figured it out after staring at the comments within the CodeGeneratorResponse message for a while.
What threw me off at first was that I was thinking of plugins as a pipeline, where the output from one feeds into the next. However, each plugin gets the exact same input (the parsed .proto files expressed via CodeGeneratorRequest messages), and all the generated code from the plugins (including the built-in ones) gets combined into the output file. Plugins may, however, modify the output from the previous plugins, which is what insertion points are designed for.
Specifically to your question, you would add a file to the response with the name field set to the name of the generated Java file, the insertion_point field set to the name of the insertion point at which you want to add code, and the content field set to the code you want inserted at that point.
I found this article helpful in creating a simple plugin (in this case in python). As a simple test, I modified the generate_code function from that article to look like this:
def generate_code(request, response):
    for proto_file in request.proto_file:
        f = response.file.add()
        f.name = "Test.java"
        f.insertion_point = "outer_class_scope"
        f.content = "// Inserting a comment as a test"
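For context, the scaffolding around generate_code follows the usual protoc-plugin pattern; here is a sketch of it (adapted from the pattern in the linked article, assuming the google.protobuf pip package is installed):
import sys

from google.protobuf.compiler import plugin_pb2 as plugin

if __name__ == '__main__':
    # protoc writes the serialized CodeGeneratorRequest to the plugin's stdin
    request = plugin.CodeGeneratorRequest()
    request.ParseFromString(sys.stdin.buffer.read())

    response = plugin.CodeGeneratorResponse()
    generate_code(request, response)

    # protoc reads the serialized CodeGeneratorResponse from the plugin's stdout
    sys.stdout.buffer.write(response.SerializeToString())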
Then I ran protoc with the plugin:
$ cat test.proto
syntax = "proto3";
message MyMsg {
  int32 num = 1;
}
$ protoc --plugin=protoc-gen-sample=sample_proto_gen.py --java_out=. --sample_out=. test.proto
$ tail -n3 Test.java
// Inserting a comment as a test
// ##protoc_insertion_point(outer_class_scope)
}
Your plugin just needs to be some executable which reads a CodeGeneratorRequest message from stdin and writes a CodeGeneratorResponse message to stdout, so it could certainly be written in Java instead (a sketch follows). I just chose Python as I'm generally more comfortable with it and found this simple example.
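For completeness, here is a minimal Java sketch of the same plugin (it assumes the protobuf-java artifact is on the classpath; the class name is made up, and the file name and insertion point mirror the Python example above):
import com.google.protobuf.compiler.PluginProtos;

public class SampleProtoGen {
    public static void main(String[] args) throws Exception {
        // protoc hands us the serialized CodeGeneratorRequest on stdin
        PluginProtos.CodeGeneratorRequest request =
                PluginProtos.CodeGeneratorRequest.parseFrom(System.in);

        PluginProtos.CodeGeneratorResponse.Builder response =
                PluginProtos.CodeGeneratorResponse.newBuilder();
        // Ask protoc to splice a comment into Test.java at outer_class_scope
        response.addFileBuilder()
                .setName("Test.java")
                .setInsertionPoint("outer_class_scope")
                .setContent("// Inserting a comment as a test");

        // protoc reads the serialized CodeGeneratorResponse from our stdout
        response.build().writeTo(System.out);
    }
}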
As a reference, here's a plugin I wrote for generating code based on custom protobuf options.
I have made a custom Python plugin. To run my plugin I use the command below:
protoc --plugin=protoc-gen-custom=my_plugin_executable_file --custom_out=./build test.proto
So I think you have to generate an executable file from your .java file and use that in your command.

How do you open a zip file using watir-webdriver?

My test suite has a cucumber front end with a ruby backend, running the latest version of watir-webdriver and its dependencies atop the latest version of OSX. My cucumber environment is setup to execute in Firefox.
The export feature of our app creates a zip file but to test the import feature, I need the contents of the zip file.
My actual test needs to unpack that zip file and select the individual files in it for use in testing the import feature of our web application.
Can anyone point me to a reference that can help me figure out how to write that?
Based on my experience, you download this file the same way that a normal user might. So first off, you just click the download button (or whatever) and then access the file wherever it is and check out its contents.
Assuming the downloads just go to your Downloads folder by default, there is some simple code you can use to select the most recently downloaded item:
# Dir.glob does not expand "~", so expand the path first
fn = Dir.glob(File.expand_path("~/Downloads/*.zip")).max { |a, b| File.ctime(a) <=> File.ctime(b) }
Then just use the unzip shell command to unzip the file. No reason to add another gem into the mix when you can just use generic shell commands.
`unzip #{fn}`
Then you'd use Dir.glob again to get the filenames of everything inside the unzipped folder. Assuming the file was named "thing.zip", you do this:
files = Dir.glob(File.expand_path("~/Downloads/thing/*"))
If you want the files to be downloaded directly to your project folder, you can try this. It also prevents the popup asking whether you really want to save the file, which is handy. I think this still works but haven't used it in some time; the above stuff works for sure though.
profile = Selenium::WebDriver::Firefox::Profile.new
download_dir = Dir.pwd + "/test_downloads"
# folderList = 2 tells Firefox to use the custom directory set below
profile['browser.download.folderList'] = 2
profile['browser.download.dir'] = download_dir
profile['browser.helperApps.neverAsk.saveToDisk'] = "application/zip"
b = Watir::Browser.new :firefox, :profile => profile
I ended up adding the rubyzip gem from https://github.com/rubyzip/rubyzip
The solution is at that link, but I modified mine a little bit. I added the following to my common.rb file; see below:
require 'zip'   # rubyzip is required with the lowercase name
require 'find'  # for Find.find

def unpack_zip
  test_home = '/Users/yournamegoeshere/SRC/watir_testing/project files'
  sleep(5) # <-- manually making time to hit the save download dialog
  zip_file_paths = []
  Find.find(test_home) do |path|
    zip_file_paths << path if path =~ /.*\.zip$/
  end
  file_name = zip_file_paths[0]
  Zip::File.open(file_name) do |zip_file|
    # Handle entries one by one
    zip_file.each do |entry|
      # Extract to file/directory/symlink
      puts "Extracting #{entry.name}"
      entry.extract(test_home + "/" + entry.name)
      # Read into memory
      content = entry.get_input_stream.read
    end
    # Find specific entry
    entry = zip_file.glob('*.csv').first
    puts entry.get_input_stream.read
  end
end
This solution works great!
