Quarkus native run exec user process caused: exec format error - aws-lambda

I started studying Quarkus.
https://quarkus.io/guides/amazon-lambda
I ran into a problem while working through the tutorial above.
Whether I run the native build locally with AWS SAM or on AWS Lambda, the same error occurs during invoke.
(The non-native build works normally.)
I'm on a MacBook M1, and both GraalVM and Java are arm64.
ahahah#bcd0745cd453 ahahaha % sh target/manage.sh native invoke
Invoking function
++ aws lambda invoke response.txt --cli-binary-format raw-in-base64-out --function-name HoiriasNative --payload file://payload.json --log-type Tail --query LogResult --output text
++ base64 --decode
START RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543 Version: $LATEST
RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543
Error: fork/exec /var/task/bootstrap: exec format error
Runtime.InvalidEntrypoint
END RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543
REPORT RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543 Duration: 2.94 ms Billed Duration: 3 ms Memory Size: 256 MB Max Memory Used: 3 MB
{"errorMessage":"RequestId: a97a6513-afbf-4925-a0f0-1acab8dec543
Error: fork/exec /var/task/bootstrap: exec format error","errorType":"Runtime.InvalidEntrypoint"}%
Here is my .bash_profile:
GRAALVM_HOME=/Library/JAVA/JavaVirtualMachines/graalvm-ce-java17-22.3.0/Contents/Home
JAVA_HOME=/Library/JAVA/JavaVirtualMachines/graalvm-ce-java17-22.3.0/Contents/Home
PATH=$PATH:$JAVA_HOME
export JAVA_HOME
export PATH=${GRAALVM_HOME}/bin:$PATH
When I uploaded the native executable to Lambda, the function was deployed as x86, so could this be the problem?
What should I do? Please help me.
I tried building it another way, and the image was created normally:
quarkus build --native --no-tests -Dquarkus.native.container-build=true
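To check which architecture the produced binary actually targets before uploading, a quick check (a sketch; the runner file name depends on your build) is:
file target/*-runner   # an arm64 build reports: ELF 64-bit LSB executable, ARM aarch64
                       # an x86_64 Lambda needs: ELF 64-bit LSB executable, x86-64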

I tried this yesterday as well.
It's currently not possible to build native images on a Mac M1 (arm64) to run on Amazon Lambda; it might be possible with some virtual machine.
Even on Linux x86_64 you should build with
quarkus build --native -Dquarkus.native.container-build=true
to maximize compatibility.
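If you want to stay on the M1, one approach that may work is forcing Docker to use an x86_64 builder image through emulation; a sketch, assuming Docker Desktop with qemu/Rosetta emulation enabled (expect slow builds):
# Force an x86_64 builder container on the arm64 host
DOCKER_DEFAULT_PLATFORM=linux/amd64 quarkus build --native --no-tests -Dquarkus.native.container-build=true
Alternatively, since Lambda also offers arm64 (Graviton2) functions, deploying the arm64 binary to a function created with the arm64 architecture may avoid cross-building entirely.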

Related

AWS Lambda Chalice Layers Segmentation Fault

I am deploying a Python 3.7 Lambda function via Chalice. Because the code with its environment requirements is larger than the 50 MB limit, I am using the "automatic_layer" feature of Chalice to generate the layer with the requirements, which is awswrangler.
Because the generated layer is > 50 MB, I am uploading the generated managed-layer-...-python3.7.zip manually to S3 and creating a Lambda layer from it. Then I re-deploy with Chalice, removing the automatic_layer option and setting the layers to the ARN of the layer I created manually.
The function deployed this way worked OK a couple of times, then started failing occasionally with "Segmentation Fault". The error rate increased quickly and now it is failing 100% of the time.
Traceback:
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> START RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Version: $LATEST
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> END RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc
> REPORT RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Duration: 7165.04 ms Billed Duration: 7166 ms Memory Size: 128 MB Max Memory Used: 41 MB
> RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Error: Runtime exited with error: signal: segmentation fault (core dumped)
> Runtime.ExitError
As awswrangler itself requires boto3 & botocore, and they are already in the Lambda environment, I suspected that there might be a conflict of different versions of boto. I tried the same flow by explicitly including boto3 and botocore in the requirements but I am still receiving the same segmentation fault error.
Any help is much appreciated.
You could use AWS X-Ray to get more information on the problem: https://docs.aws.amazon.com/lambda/latest/dg/python-tracing.html
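A sketch of enabling active tracing from the CLI (the function name is illustrative):
aws lambda update-function-configuration --function-name my-chalice-fn --tracing-config Mode=Active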
Moreover, you might analyze the core dump generated by executing your Lambda function in a bash shell:
ulimit -c unlimited
cd /tmp
# execute your python ...
You should find a file named /tmp/core..... that you can download and analyze with gdb. The command "man core" may help.
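Once downloaded, a minimal gdb session might look like this (the interpreter path and core file name are illustrative):
gdb /usr/bin/python3.7 core.12345   # open the dump against the interpreter that produced it
# inside gdb:
#   bt                  -- backtrace of the crashing thread
#   info sharedlibrary  -- which .so files were loaded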

Add shared library to a AWS Lambda Go binary

Context
I'm developing an AWS Lambda function using Go, and one of my dependencies is gopkg.in/h2non/bimg.v1, which in turn depends on libvips 7.42+ or 8+ (8.4+ recommended).
Problem
The problem is that the Lambda handler works on my local machine, but when I deploy it this error occurs:
START RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e Version: $LATEST
/var/task/main: error while loading shared libraries: libvips.so.42: cannot open shared object file: No such file or directory
END RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e
REPORT RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e Duration: 42.36 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 12 MB
RequestId: b4becbd1-3fca-4aed-9574-8df0e3d13c9e Process exited before completing request
My build command is:
GOOS=linux GOARCH=amd64 go build -o main main.go
What I tried
I tried building it with the c-shared option enabled:
GOOS=linux GOARCH=amd64 go build -buildmode=c-shared -o main main.go
But that failed too, with a different error (note that -buildmode=c-shared produces a C shared library rather than an executable, so Lambda cannot exec it):
START RequestId: 9b90df21-1025-463b-89b1-1a4ee31f295c Version: $LATEST
fork/exec /var/task/main: permission denied: PathError
null
END RequestId: 9b90df21-1025-463b-89b1-1a4ee31f295c
REPORT RequestId: 9b90df21-1025-463b-89b1-1a4ee31f295c Duration: 0.77 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 30 MB Init Duration: 1.84 ms
I see two options:
Rewrite using a pure Go library.
Generate a Go binary with the libvips library statically packed into it.
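Before going either route, it may be worth ruling out a missing execute bit on the uploaded binary, which also produces fork/exec ... permission denied errors on Lambda; a quick check, assuming you zip from a Unix shell:
chmod +x main          # ensure the execute bit is set before zipping
zip function.zip main  # zip preserves Unix file permissions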
There is another option: put all the .so files into the zip archive with your binary and upload the zip as the Lambda. Your archive content should look like this:
╰─ unzip -l function.zip
Archive: function.zip
Length Date Time Name
--------- ---------- ----- ----
6764336 10-08-2020 01:01 imgconvert
284008 06-19-2020 09:16 libexif.so.12
276072 08-22-2019 08:14 libjpeg.so.62
155824 12-10-2015 02:17 libpng12.so.0
468376 10-01-2019 03:37 libtiff.so.5
12261600 10-08-2020 00:48 libvips.so.42
3579016 10-08-2020 00:45 libwebp.so.7
85328 10-08-2020 00:45 libwebpdemux.so.2
205696 10-08-2020 00:45 libwebpmux.so.3
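For completeness, a sketch of producing such an archive (the binary name imgconvert comes from the listing above; run this on a Linux machine or container so that ldd resolves Linux libraries):
GOOS=linux GOARCH=amd64 go build -o imgconvert main.go
# copy every shared library the binary links against into the current directory
ldd imgconvert | awk '/=> \//{print $3}' | xargs -I{} cp {} .
zip function.zip imgconvert *.so.*
Lambda's LD_LIBRARY_PATH includes /var/task, so the dynamic loader should find the bundled libraries at runtime.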

Unreal build using UnrealBuildTool with the command line

Yesterday things were running fine. Today? Not so much.
In Unreal on a Mac, using Blueprints only, I get this error:
LogPlayLevel: Error: ERROR: Stage Failed. Missing receipt Check that this target has been built.
LogPlayLevel: AutomationTool exiting with ExitCode=103 (Error_MissingExecutable)
LogPlayLevel: Completed Launch On Stage: Build Task, Time: 1.776950
LogPlayLevel: Completed Launch On Stage: Deploy Task, Time: 0.000039
LogPlayLevel: Error: RunUAT ERROR: AutomationTool was unable to run successfully.
PackagingResults: Error: Launch failed! Missing UE4Game binary.
You may have to build the UE4 project with your IDE. Alternatively, build using UnrealBuildTool with the commandline:
UE4Game [Platform] [Configuration]
So I try to build from the command line:
cd "/Users/Shared/Epic Games/UE_4.23/Engine/Binaries/Mac"
open UE4Editor.app --args "/Users/me/Documents/Unreal Projects/Some Folder/MyProject.uproject" -run=cook -targetplatform=Android
This switches focus from the Terminal to UE4Editor, but nothing happens. What am I missing or getting wrong?
Note to future self:
I was able to get around the hold-up above by simply creating a packaged Android build - which built beautifully and successfully.
That said, I'd still like to know the command line to build an Android version.
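For the record, the usual command-line route is the RunUAT BuildCookRun automation script rather than opening the editor; a sketch for this setup (paths and flags are illustrative for 4.23):
"/Users/Shared/Epic Games/UE_4.23/Engine/Build/BatchFiles/RunUAT.sh" BuildCookRun \
  -project="/Users/me/Documents/Unreal Projects/Some Folder/MyProject.uproject" \
  -platform=Android -clientconfig=Development -build -cook -stage -pak -archive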

Deploy does not generate all packages

I'm using a Google library called Dialogflow, and in the last 6 or 7 days all the Lambda functions that import this library started to give an initialization error.
I noticed that this happened at roughly the same time the Serverless Framework was upgraded from version 1.31.0 to 1.32.0. In my serverless.yml file I put: frameworkVersion: ">=1.0.0 <2.0.0"
If I deploy a simple function like this:
import dialogflow

def hi(event, context):
    return {
        "statusCode": 200,
        "body": "ahhh hiiii"
    }
The error generated in Lambda is as follows:
START RequestId: 907fe23d-c2b1-11e8-b745-27859211eefc Version: $LATEST
module initialization error: The 'google-api-core' distribution was not found and is required by the application
END RequestId: 907fe23d-c2b1-11e8-b745-27859211eefc
REPORT RequestId: 907fe23d-c2b1-11e8-b745-27859211eefc Duration: 47.02 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 32 MB
module initialization error The 'google-api-core' distribution was not found and is required by the application
The problem is with libraries that rely on low-level (usually C) extensions: when Serverless packages the dependencies, those compiled components are not built for the Lambda environment.
The solution: enable Docker packaging through the serverless-python-requirements plugin:
custom:
  pythonRequirements:
    dockerizePip: true
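For this to take effect, the plugin also has to be installed and declared under plugins: in serverless.yml; a sketch of the setup (Docker must be running during deploy):
npm install --save-dev serverless-python-requirements
serverless deploy   # requirements are now built inside a Lambda-compatible Docker image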

tesseract-ocr works on EC2, not lambda

My goal is to run tesseract-ocr in AWS Lambda.
I've built an EC2 instance that attempts to mirror the Lambda environment. Executing tesseract without parameters succeeds in both environments. However, any attempt at substantive image processing, e.g. this code:
var child_process = require('child_process');
var tess = child_process.exec('tesseract input.tif output -l eng -psm 1 hocr', function(error, stdout, stderr) {
...
runs successfully on my EC2 box, but fails in Lambda with this error:
Error: Command failed: Tesseract Open Source OCR Engine v3.04.00 with Leptonica
Error during processing.
at ChildProcess.exithandler (child_process.js:648:15)
at ChildProcess.emit (events.js:98:17)
at maybeClose (child_process.js:756:16)
at Process.ChildProcess._handle.onexit (child_process.js:823:5)
Error code: 1
Signal received: null
Lambda is assuming an IAM role with administrative privileges ({ "Effect": "Allow", "Action": "*", "Resource": "*" })
The "Error during processing" error is emitted by tesseract as a top level catch-all. I'm going to instrument tesseract and try to narrow the problem further.
How I got here:
My EC2 machine is a t2.micro running Amazon Linux in us-east-1 (amzn-ami-hvm-2014.09.2.x86_64-ebs (ami-146e2a7c)).
I installed node 0.10.33 and aws-sdk#2.0.23, which match the Lambda versions.
I compiled tesseract and leptonica from source, added an rpath, and ran ldd to confirm that all dependencies are found.
The tesseract binaries and liblept.so are all in my root directory (/var/task).
I'd like to know what's going wrong - or how to diagnose it.
Thank you,
Dave
Short answer: output must go in the /tmp dir, e.g.
tesseract input.tif /tmp/output -l eng -psm 1 hocr
Slightly longer answer: tesseract calls fopen(..., "wb") under the hood, and apparently writing files is forbidden in /var/task.
I would have noticed this a few days ago, but Lambda had not been propagating my deployment packages. So the one time I tried to put output in the /tmp dir, there was no effect - but that was because Lambda was executing a stale version of my function. The solution is to always call delete-function before update-function.
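A sketch of that clean-redeploy cycle with the AWS CLI (the function name, zip path, role ARN, and legacy runtime name are illustrative):
aws lambda delete-function --function-name my-ocr-fn
aws lambda create-function --function-name my-ocr-fn \
  --runtime nodejs --handler index.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-exec   # hypothetical role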
