Context
I'm developing an AWS Lambda function in Go, and one of my dependencies is gopkg.in/h2non/bimg.v1, which in turn depends on libvips 7.42+ or 8+ (8.4+ recommended).
Problem
The problem is that the Lambda handler works on my local machine, but when I deploy it, this error occurs:
START RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e Version: $LATEST
/var/task/main: error while loading shared libraries: libvips.so.42: cannot open shared object file: No such file or directory
END RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e
REPORT RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e Duration: 42.36 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 12 MB
RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e Process exited before completing request
My build command is:
GOOS=linux GOARCH=amd64 go build -o main main.go
What I tried
I tried to build it with the c-shared build mode enabled:
GOOS=linux GOARCH=amd64 go build -buildmode=c-shared -o main main.go
But I got an error too, albeit a different one:
START RequestId: 9b90df21-1025-463b-89b1-1a4ee31f295c Version: $LATEST
fork/exec /var/task/main: permission denied: PathError
null
END RequestId: 9b90df21-1025-463b-89b1-1a4ee31f295c
REPORT RequestId: 9b90df21-1025-463b-89b1-1a4ee31f295c Duration: 0.77 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 30 MB Init Duration: 1.84 ms
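For what it's worth, a fork/exec ... permission denied error at this point usually just means the file in the zip is not marked executable, and -buildmode=c-shared produces a shared library rather than a program. A minimal sketch of the more likely intended build, with the exec bit preserved (assuming the zip is created by hand):
GOOS=linux GOARCH=amd64 go build -o main main.go
chmod +x main          # make sure the exec bit survives into the zip
zip function.zip main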
As I see it, I have two options:
Rewrite everything with a pure-Go image library.
Build the binary with libvips statically linked into it.
There is another option: put all the .so files into the zip archive together with your binary and upload the zip as your Lambda. Your archive content should then look like this (a sketch for collecting the libraries follows the listing):
╰─ unzip -l function.zip
Archive: function.zip
Length Date Time Name
--------- ---------- ----- ----
6764336 10-08-2020 01:01 imgconvert
284008 06-19-2020 09:16 libexif.so.12
276072 08-22-2019 08:14 libjpeg.so.62
155824 12-10-2015 02:17 libpng12.so.0
468376 10-01-2019 03:37 libtiff.so.5
12261600 10-08-2020 00:48 libvips.so.42
3579016 10-08-2020 00:45 libwebp.so.7
85328 10-08-2020 00:45 libwebpdemux.so.2
205696 10-08-2020 00:45 libwebpmux.so.3
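A minimal sketch of one way to collect those libraries, assuming the binary was built on an Amazon Linux box or container where ldd can resolve its dependencies (paths and names are illustrative):
mkdir -p build && cp imgconvert build/
# copy every shared library the binary links against next to it
# (you may want to skip core system libs such as libc)
ldd imgconvert | awk '/=> \// {print $3}' | xargs -I{} cp {} build/
(cd build && zip -r ../function.zip .)
This works because Lambda's LD_LIBRARY_PATH includes /var/task, so the dynamic loader finds the bundled .so files next to the binary.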
Related
I started studying Quarkus.
https://quarkus.io/guides/amazon-lambda
I ran into a problem while working through the tutorial on the site above.
With a native build, the same error occurs during invoke whether I run it locally with AWS SAM or on AWS Lambda.
(If it is not a native build, it works normally.)
I'm using a MacBook M1, and both GraalVM and Java are arm64 builds.
ahahah@bcd0745cd453 ahahaha % sh target/manage.sh native invoke
Invoking function
++ aws lambda invoke response.txt --cli-binary-format raw-in-base64-out --function-name HoiriasNative --payload file://payload.json --log-type Tail --query LogResult --output text
++ base64 --decode
START RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543 Version: $LATEST
RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543
Error: fork/exec /var/task/bootstrap: exec format error
Runtime.InvalidEntrypoint
END RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543
REPORT RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543 Duration: 2.94 ms Billed Duration: 3 ms Memory Size: 256 MB Max Memory Used: 3 MB
{"errorMessage":"RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543
Error: fork/exec /var/task/bootstrap: exec format error","errorType":"Runtime.InvalidEntrypoint"}%
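For reference, exec format error almost always means the binary's CPU architecture does not match the Lambda runtime. A quick check, sketched below (the path to the native runner is an assumption; adjust it to wherever your build puts the artifact):
file target/*-runner
# arm64 output (will not run on an x86_64 Lambda):
#   ELF 64-bit LSB executable, ARM aarch64 ...
# what an x86_64 Lambda expects:
#   ELF 64-bit LSB executable, x86-64 ...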
This is my .bash_profile:
GRAALVM_HOME=/Library/JAVA/JavaVirtualMachines/graalvm-ce-java17-22.3.0/Contents/Home
JAVA_HOME=/Library/JAVA/JavaVirtualMachines/graalvm-ce-java17-22.3.0/Contents/Home
PATH=$PATH:$JAVA_HOME
export JAVA_HOME
export PATH=${GRAALVM_HOME}/bin:$PATH
When I uploaded the native build to Lambda, it was deployed as x86, so could this be the problem?
What should I do?
Please help me.
I tried building it another way, and the image was created normally:
quarkus build --native --no-tests -Dquarkus.native.container-build=true
I tried this yesterday as well.
It's currently not possible to build native images on a Mac M1 (arm64) that will run on Amazon Lambda. It might be possible with some virtual machine.
Even on Linux x86_64 you should build with
quarkus build --native -Dquarkus.native.container-build=true
to maximize compatibility.
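If you do want to try from an M1 Mac anyway, one thing sometimes suggested is forcing an amd64 build container through Docker's platform emulation. This is a sketch only and not verified here; emulated native-image builds are very slow and may still fail:
DOCKER_DEFAULT_PLATFORM=linux/amd64 \
  quarkus build --native --no-tests -Dquarkus.native.container-build=true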
I have a (1) Dockerfile and a (2) C++ project built with Bazel.
I want to create a Docker image that has the Bazel targets pre-built, so that when I spin up new containers, the targets are already built and I just run bazel run //hello:hello_world from the container's bash.
Dockerfile
# Copy my project with Bazel files to a Docker image, and the
...
RUN bazel --output_user_root=/tmp/hello_project/bazel build //...
...
During the Docker image build, I get the following output, which is expected:
Loading:
Loading: 0 packages loaded
Analyzing: 2 targets (1 packages loaded, 0 targets configured)
Analyzing: 2 targets (11 packages loaded, 18 targets configured)
INFO: Analyzed 2 targets (15 packages loaded, 60 targets configured).
INFO: Found 2 targets...
[0 / 11] [Prepa] BazelWorkspaceStatusAction stable-status.txt
INFO: Elapsed time: 6.333s, Critical Path: 0.37s
INFO: 11 processes: 6 internal, 5 processwrapper-sandbox.
INFO: Build completed successfully, 11 total actions
INFO: Build completed successfully, 11 total actions
When I run a new container from the Docker image built previously, and inside the container I run
bazel run //hello:hello_world
Instead of using the existing pre-built targets, it rebuilds them, which should not be necessary.
Result I expect (but don't get): everything is pre-built and just needs to run
INFO: Analyzed target //hello:hello_world (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //hello:hello_world up-to-date:
bazel-bin/hello/hello_world
INFO: Elapsed time: 0.163s, Critical Path: 0.01s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
Hello World!
Result I get: it rebuilds the binaries
[root@4a6bdb57fd79 test-rc]# bazel run //hello:hello_world
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed target //hello:hello_world (15 packages loaded, 60 targets configured).
INFO: Found 1 target...
Target //hello:hello_world up-to-date:
bazel-bin/hello/hello_world
INFO: Elapsed time: 6.255s, Critical Path: 0.38s
INFO: 7 processes: 4 internal, 3 processwrapper-sandbox.
INFO: Build completed successfully, 7 total actions
INFO: Build completed successfully, 7 total actions
Hello World!
How can I make sure that bazel run uses the same pre-built targets and does not build them again before running?
This sounds like non-determinism: in the second execution of Bazel, something is different, causing a cache miss.
Some things that can cause it:
different options passed to bazel build vs. bazel run - check your .bazelrc file
actions that include VCS info or a timestamp - make sure you have rules_docker 0.22.0 or later to pick up https://github.com/bazelbuild/rules_docker/commit/2b35b2dd56f0be6cc6b8df957332a31435f6b3ce
One step to diagnose is to pass the --explain=log.txt and --verbose_explanations flags to Bazel; the log file will then say why each action was rebuilt. It doesn't have much detail, though, just something like "a source file has changed" (see the sketch below).
If you want the power tool for this, there is a way to find out exactly why Bazel didn't get a cache hit: read https://georgi.hristozov.net/til/2020/04/20/compare-bazel-execlogs-to-find-non-deterministic-parts-of-the-build
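A minimal sketch of that diagnosis step (the log path is arbitrary):
bazel build //hello:hello_world --explain=/tmp/explain.log --verbose_explanations
cat /tmp/explain.log   # states, per action, why it was (re)run
Separately, note that the bazel run log above begins with "Extracting Bazel installation..." and "Starting local Bazel server", while the image was built with --output_user_root=/tmp/hello_project/bazel. Dropping that startup flag at run time points Bazel at a different output root where none of the cached results exist, so a matching invocation would be:
bazel --output_user_root=/tmp/hello_project/bazel run //hello:hello_world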
I'm getting started with Bazel and trying to generate the Go protobuf code for an RPC service.
When I try to build it I get the following error:
bazel-out/k8-fastbuild/bin/examples/grpc/protos/helloworld_go_proto_/examples/grpc/protos/helloworld.pb.go:229:7: undefined: grpc.ClientConnInterface
bazel-out/k8-fastbuild/bin/examples/grpc/protos/helloworld_go_proto_/examples/grpc/protos/helloworld.pb.go:233:11: undefined: grpc.SupportPackageIsVersion6
bazel-out/k8-fastbuild/bin/examples/grpc/protos/helloworld_go_proto_/examples/grpc/protos/helloworld.pb.go:243:5: undefined: grpc.ClientConnInterface
bazel-out/k8-fastbuild/bin/examples/grpc/protos/helloworld_go_proto_/examples/grpc/protos/helloworld.pb.go:246:26: undefined: grpc.ClientConnInterface
Full build log: https://app.buildbuddy.io/invocation/c3773978-22dd-44c8-b977-13967a1953b7
Here is the code: https://github.com/juanique/example-go-grpc. I'm trying to include the least possible amount of code to make that target work.
Since the BUILD file was generated by gazelle I suspect the issue is in the WORKSPACE file: https://raw.githubusercontent.com/juanique/example-go-grpc/main/WORKSPACE. I'm just doing what I found in https://github.com/bazelbuild/rules_go
UPDATE: it certainly looks like a version issue: https://github.com/grpc/grpc-go#compiling-error-undefined-grpcsupportpackageisversion
I'm not a Bazel user, but after hours of testing I found that your gRPC version should be higher:
go_repository(
    name = "org_golang_google_grpc",
-   build_file_proto_mode = "disable",
    importpath = "google.golang.org/grpc",
-   sum = "h1:J0UbZOIrCAl+fpTOf8YLs4dJo8L/owV4LYVtAXQoPkw=",
-   version = "v1.22.0",
+   sum = "h1:f+PlOh7QV4iIJkPrx5NQ7qaNGFQ3OTse67yaDHfju4E=",
+   version = "v1.41.0",
)
Then you can build it:
~/code/example-go-grpc (main*) [09:45:55]
p1gd0g$ bazel build //examples/grpc/protos:helloworld_go_proto
INFO: Analyzed target //examples/grpc/protos:helloworld_go_proto (49 packages loaded, 306 targets configured).
INFO: Found 1 target...
Target //examples/grpc/protos:helloworld_go_proto up-to-date:
bazel-bin/examples/grpc/protos/helloworld_go_proto.a
INFO: Elapsed time: 41.180s, Critical Path: 1.20s
INFO: 29 processes: 1 internal, 28 linux-sandbox.
INFO: Build completed successfully, 29 total actions
And I recommend using gazelle with go.mod to generate the WORKSPACE file; a sketch follows. A demo: https://github.com/p1gd0g/helloworld
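A sketch of that workflow, assuming the conventional gazelle setup from the rules_go documentation (the //:gazelle target and deps.bzl macro names are the usual conventions, not taken from this repo):
bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%go_dependencies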
I am deploying a Python 3.7 Lambda function via Chalice. Because the code together with its requirements is larger than the 50 MB limit, I am using Chalice's "automatic_layer" feature to generate a layer with the requirements, namely awswrangler.
Because the generated layer is > 50 MB, I upload the generated managed-layer-...-python3.7.zip to S3 manually and create a Lambda layer from it. Then I re-deploy with Chalice, removing the automatic_layer option and setting the layers option to the ARN of the layer I created manually.
The function deployed this way worked fine a couple of times, then started failing occasionally with "Segmentation Fault". The error rate increased quickly, and now it fails 100% of the time.
Traceback:
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> START RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Version: $LATEST
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> END RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc
> REPORT RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Duration: 7165.04 ms Billed Duration: 7166 ms Memory Size: 128 MB Max Memory Used: 41 MB
> RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Error: Runtime exited with error: signal: segmentation fault (core dumped)
> Runtime.ExitError
As awswrangler itself requires boto3 and botocore, and they are already present in the Lambda environment, I suspected there might be a conflict between different boto versions. I tried the same flow while explicitly including boto3 and botocore in the requirements, but I am still getting the same segmentation fault.
Any help is much appreciated.
You could use AWS X-Ray to get more information on the problem: https://docs.aws.amazon.com/lambda/latest/dg/python-tracing.html
Moreover, you could analyze the core dump generated by executing your Lambda function in a bash shell:
ulimit -c unlimited
cd /tmp
# execute your python ...
You should find a file named /tmp/core..... that you can download and then analyze with gdb; a sketch follows. The command man core may help.
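A sketch of that analysis, with placeholder file names (the interpreter should match the Lambda runtime's Python build):
gdb /path/to/python3.7 /tmp/core.12345
# then, at the (gdb) prompt, "bt" prints the backtrace of the crashing thread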
I'm using a Google library called Dialogflow, and in the last 6 or 7 days all the Lambda functions that import this library have started to fail with an initialization error.
I noticed this happened at roughly the same time the Serverless Framework was upgraded from version 1.31.0 to 1.32.0. In my serverless.yml file I have: frameworkVersion: ">=1.0.0 <2.0.0"
Even if I deploy code as simple as this:
import dialogflow

def hi(event, context):
    return {
        "statusCode": 200,
        "body": "ahhh hiiii"
    }
the error generated in Lambda is as follows:
START RequestId: 907fe23d-c2b1-11e8-b745-27859211eefc Version: $LATEST
module initialization error: The 'google-api-core' distribution was not found and is required by the application
END RequestId: 907fe23d-c2b1-11e8-b745-27859211eefc
REPORT RequestId: 907fe23d-c2b1-11e8-b745-27859211eefc Duration: 47.02 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 32 MB
module initialization error The 'google-api-core' distribution was not found and is required by the application
The problem is with libraries that rely on compiled, low-level code (usually C). When Serverless builds the deployment package, those compiled parts are not packaged for the Lambda environment.
The solution: enable Docker packaging through the serverless-python-requirements plugin (see the sketch after the config):
custom:
pythonRequirements:
dockerizePip: true
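For this to take effect, the plugin also has to be installed and declared; a minimal sketch:
npm install --save-dev serverless-python-requirements
and in serverless.yml:
plugins:
  - serverless-python-requirements
Docker must be running locally, since dockerizePip builds the requirements inside a Lambda-compatible Linux container.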