Docker Image on AWS Lambda is executing the RUN/Entrypoint Twice on Testing - bash

I am currently learning and experimenting with Lambda and Docker. This is my current Dockerfile:
FROM amazonlinux:2.0.20191016.0
RUN yum install jq -y
COPY . ./
CMD chmod 755 ./random.sh ; chmod 755 ./discord.sh
ENTRYPOINT "./random.sh"
Pretty basic; random.sh sends a message to my Discord server via discord.sh.
When I do a test run, it seems ./random.sh is called twice:
2022-04-05T13:24:23.537+02:00 9
2022-04-05T13:24:23.537+02:00 https://www.oetker.at/dr-oetker-cms/oetker.de/image/image-thumb__47425__auto_23393e4cf279157878cad04620baa711/Paula-am-kochen_02.png
2022-04-05T13:24:23.971+02:00 START RequestId: c3dca9f8-1a3f-415b-8a0c-b41cd441fb84 Version: $LATEST
2022-04-05T13:24:24.023+02:00 3
2022-04-05T13:24:24.023+02:00 https://www.sueddeutsche.de/image/sz.1.937584/640x360?v=1528418182
2022-04-05T13:24:24.726+02:00 END RequestId: c3dca9f8-1a3f-415b-8a0c-b41cd441fb84
2022-04-05T13:24:24.726+02:00 REPORT RequestId: c3dca9f8-1a3f-415b-8a0c-b41cd441fb84 Duration: 752.69 ms Billed Duration: 753 ms Memory Size: 128 MB Max Memory Used: 6 MB
2022-04-05T13:24:24.726+02:00 RequestId: c3dca9f8-1a3f-415b-8a0c-b41cd441fb84 Error: Runtime exited without providing a reason Runtime.ExitError
That is the log of the execution, and as the numbers and the logged link show, the code in my random.sh seems to run twice.
Another problem is how to fix the runtime error, because my random.sh always finishes with exit 0, which should signal success.
I hope you can help me out. I could avoid this by simply rewriting it in Python as a plain Lambda function, but I wanted to try this approach, and normally the ENTRYPOINT should be executed only once. I also set asynchronous invocation retries to 0 so it doesn't retry on failure.

A Lambda deployed as an image isn't expected to work like a containerized app (as on Amazon ECS).
You need to install the Lambda Runtime Interface Client and integrate your code with it; see the guide "Creating images from alternative base images".
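As a rough sketch of what that guide describes for a shell-based image: Lambda starts the container once, and the entrypoint is expected to loop on the Runtime API, fetching one event per invocation and posting a response. An ENTRYPOINT that just runs the handler executes at container start and then exits, which matches both the extra run around the START/END markers and the Runtime.ExitError in the log. The endpoint paths below come from the Lambda custom-runtime documentation; random.sh is the handler from the question, and the rest is an illustrative sketch that only runs inside the Lambda environment:

```shell
#!/bin/sh
# bootstrap (ENTRYPOINT): minimal Lambda custom-runtime loop (sketch)
while true; do
  HEADERS=$(mktemp)
  # Block until Lambda hands us the next invocation event
  EVENT=$(curl -sS -D "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -i Lambda-Runtime-Aws-Request-Id "$HEADERS" \
    | tr -d '[:space:]' | cut -d: -f2)
  # Run the handler once per invocation instead of once per container start
  RESULT=$(./random.sh "$EVENT")
  # Report the result so the runtime doesn't "exit without providing a reason"
  curl -sS -X POST -d "$RESULT" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response"
done
```

The Dockerfile would then set this script as the ENTRYPOINT instead of random.sh itself.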

Related

Failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB) in quic golang appengine

I am using Google Cloud App Engine to deploy my quic-go server, but I am getting the error:
failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB).
I am using an app.yaml file to build a Dockerfile, which is as follows:
FROM golang:1.18.3
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN apt-get update && apt-get install -y ffmpeg
CMD sudo --sysctl net.core.rmem_default=15000000
CMD sudo --sysctl net.core.rmem_max=15000000
RUN go build -x server.go
ENV GCS_BUCKETNAME xyz
ENV AI_CLIENT_SSL_CERT /path to cert
ENV AI_CLIENT_SSL_KEY /path to key
ENV GCP_BUCKET_SERVICE_ACCOUNT_CREDS /path to google cloud service account credential
CMD [ "./server" ]
This is my app.yaml
runtime: custom
env: flex
env_variables:
  GCS_BUCKETNAME: "xyz"
  AI_CLIENT_SSL_CERT: "./path to cert"
  AI_CLIENT_SSL_KEY: "./path to key"
  GCP_BUCKET_SERVICE_ACCOUNT_CREDS: "./path to google cloud credential.json file"
service: streaming-app
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 20
  cpu_utilization:
    target_utilization: 0.85
  target_concurrent_requests: 100
Any sort of help will be appreciated.
sysctl is OS-level configuration that doesn't fit in with App Engine's principal use case, and App Engine currently has no way of configuring the underlying sysctl settings. I believe Google Kubernetes Engine may be a better fit for running that server, as App Engine environments have only a limited set of configurable settings.
can you tell me the scenarios when this file is not present in the kernel?
I'm not sure about the scenarios, as I have very little experience with the kernel. That seems like a different question from the original post; you could raise a new Stack Overflow question for it.
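For completeness, on a platform where you control the workload's security context, such as GKE (which the answer suggests), the buffer sizes quic-go asks for can be raised before the server starts. This is an illustrative sketch only; the pod name, image name, and values are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quic-server                   # hypothetical name
spec:
  initContainers:
    - name: raise-udp-buffers
      image: busybox
      securityContext:
        privileged: true              # required to change kernel parameters
      command:
        - sh
        - -c
        # quic-go asks for ~2 MiB; 2500000 bytes leaves some headroom
        - sysctl -w net.core.rmem_max=2500000 net.core.rmem_default=2500000
  containers:
    - name: server
      image: gcr.io/my-project/quic-server:latest   # hypothetical image
```

Note that a sysctl in a Dockerfile CMD (as in the question) cannot work: each CMD overrides the previous one, and the container cannot change host kernel parameters without privileges.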

Storing Artifacts From a Failed Build

I am running some screen-diffing tests in one of my Cloud Build steps. The tests produce png files that I would like to view after the build, but it appears artifacts are only uploaded on successful builds.
If my tests fail, the process exits with a non-zero code, which results in this error:
ERROR: build step 0 "gcr.io/k8s-skaffold/skaffold" failed: step exited with non-zero status: 1
which further results in another error:
ERROR: (gcloud.builds.submit) build a22d1ab5-c996-49fe-a782-a74481ad5c2a completed with status "FAILURE"
And no artifacts get uploaded.
I added || true after my tests, so it exits successfully, and the artifacts get uploaded.
I want to:
A) Confirm that this behavior is expected
B) Know if there is a way to upload artifacts even if a step fails
Edit:
Here is my cloudbuild.yaml
options:
  machineType: 'N1_HIGHCPU_32'
timeout: 3000s
steps:
  - name: 'gcr.io/k8s-skaffold/skaffold'
    env:
      - 'CLOUD_BUILD=1'
    entrypoint: bash
    args:
      - -x # print commands as they are being executed
      - -c # run the following command...
      - build/test/smoke/smoke-test.sh
artifacts:
  objects:
    location: 'gs://cloudbuild-artifacts/$BUILD_ID'
    paths: [
      '/workspace/build/test/cypress/screenshots/*.png'
    ]
Google Cloud Build doesn't allow us to upload artifacts (or run further steps) if a build step fails. This is the expected behavior.
There is already a feature request in the Public Issue Tracker to allow some steps to run even though the build has finished or failed. Please feel free to star it to get all related updates on this issue.
A workaround for now is, as you mentioned, using || true after the tests, or using || exit 0 as mentioned in this GitHub issue.
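Applied to the cloudbuild.yaml from the question, the workaround amounts to masking the test step's exit code so the artifacts section still runs (test failure then has to be detected from the uploaded screenshots or logs rather than the build status):

```yaml
steps:
  - name: 'gcr.io/k8s-skaffold/skaffold'
    entrypoint: bash
    args:
      - -c
      # '|| true' forces a zero exit code, so the build is marked
      # successful and artifact upload is not skipped
      - build/test/smoke/smoke-test.sh || true
artifacts:
  objects:
    location: 'gs://cloudbuild-artifacts/$BUILD_ID'
    paths: ['/workspace/build/test/cypress/screenshots/*.png']
```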

How to write a policy in .yaml for a python lambda to read from S3 using the aws sam cli

I am trying to deploy a python lambda to aws. This lambda just reads files from s3 buckets when given a bucket name and file path. It works correctly on the local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy in prod so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The lambda can now be seen in the Lambda console and I can run it, but no contents are returned. If I manually change the service role to one that allows S3 get/put, the lambda works, but that undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties:'. I then rebuild the app and re-deploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I delete the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    - Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll also need one that lets Lambdas invoke other Lambdas, so a solution here (I imagine something similar?) would be great, or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!
Found it! In case anyone else struggles with this, you need to add the following few lines to Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" allows your lambda to talk to any bucket; you could switch it for something specific if required. If you leave out 'BucketName', it doesn't work and CloudFormation returns an error saying that S3CrudPolicy is invalid.
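Put together, the relevant fragment of template.yaml might look like this. The function's logical ID is from the question; the handler, runtime, and the LambdaInvokePolicy entry (one of SAM's built-in policy templates, relevant to the follow-up about Lambdas invoking other Lambdas) are illustrative assumptions:

```yaml
Resources:
  GetFileFromBucketFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler        # hypothetical handler
      Runtime: python3.9                 # hypothetical runtime
      Policies:
        # Grants S3 create/read/update/delete on the matching bucket(s)
        - S3CrudPolicy:
            BucketName: "*"              # narrow to a real bucket in production
        # Built-in template for letting this function invoke other Lambdas
        - LambdaInvokePolicy:
            FunctionName: "*"
```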

Issue installing openwhisk with incubator-openwhisk-devtools

I have a blocking issue installing OpenWhisk with Docker.
I typed make quick-start right after a git pull of the incubator-openwhisk-devtools project. My OS is Fedora 29, Docker version 18.09.0, docker-compose version 1.22.0, Oracle JDK 8.
I get the following error:
[...]
adding the function to whisk ...
ok: created action hello
invoking the function ...
error: Unable to invoke action 'hello': The server is currently unavailable (because it is overloaded or down for maintenance). (code ciOZDS8VySDyVuETF14n8QqB9wifUboT)
[...]
[ERROR] [#tid_sid_unknown] [Invoker] failed to ping the controller: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for health-0: 30069 ms has passed since batch creation plus linger time
[ERROR] [#tid_sid_unknown] [KafkaProducerConnector] sending message on topic 'health' failed: Expiring 1 record(s) for health-0: 30009 ms has passed since batch creation plus linger time
Please note that controller-local-logs.log is never created.
If I touch controller-local-logs.log in the right directory, the file is still empty after I run make quick-start again.
http://localhost:8888/ping gives me the right answer: pong.
http://localhost:9222 is not reachable.
Where am I wrong?
Thank you in advance

Jekyll Error - "serve" only works once

Started working on an update to my website. It was working fine the other day, but now I get an error.
I generally type "bundle exec jekyll serve --watch" in CMD (I'm on Windows 10) and the server starts. I can edit and save, and it's all reflected in the browser upon refresh.
Now the server starts and the first change to a file works, but on the next change I get an error and have to terminate and run the command again.
Below is the error:
D:\Tristen Grant\Documents\GitHub\portfolio>bundle exec jekyll serve --watch
DL is deprecated, please use Fiddle
Configuration file: D:/Tristen Grant/Documents/GitHub/portfolio/_config.yml
Source: D:/Tristen Grant/Documents/GitHub/portfolio
Destination: D:/Tristen Grant/Documents/GitHub/portfolio/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.595 seconds.
Auto-regeneration: enabled for 'D:/Tristen Grant/Documents/GitHub/portfolio'
Configuration file: D:/Tristen Grant/Documents/GitHub/portfolio/_config.yml
Server address: http://127.0.0.1:3000//
Server running... press ctrl-c to stop.
Regenerating: 1 file(s) changed at 2016-09-06 16:16:28 ...done in 0.521498 seconds.
Regenerating: 1 file(s) changed at 2016-09-06 16:16:30 ...error:
Error: No such file or directory - git rev-parse HEAD
Error: Run jekyll build --trace for more information.
[2016-09-06 16:19:35] ERROR Errno::ENOTSOCK: An operation was attempted on something that is not a socket.
C:/Ruby21-x64/lib/ruby/2.1.0/webrick/server.rb:170:in `select'
Terminate batch job (Y/N)? y
Terminate batch job (Y/N)? y
I'm using the 64-bit version of Ruby 2.1.5, with RubyDevKit, Sass, and Bourbon.
Any ideas how to fix this? I don't know much about Jekyll or Ruby; I'm just starting out.
I also get this error in CMD. If you have GitHub's desktop app installed, try running the same commands from the Git Shell that comes with that app.
For me that works, and it saves the trouble of installing Git globally on Windows and setting it up.
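This fits the error in the log: Jekyll's regeneration shells out to git rev-parse HEAD, which fails with "No such file or directory" when git is not on the PATH of the shell running the server. A quick diagnostic (a POSIX-shell sketch; in Windows CMD the equivalent check is `where git`):

```shell
# Check whether git is reachable from this shell; Jekyll's regeneration
# runs 'git rev-parse HEAD' and fails if it is not.
if command -v git >/dev/null 2>&1; then
  echo "git found at: $(command -v git)"
else
  echo "git is not on PATH"
fi
```

If git is missing, installing it (or using a shell where it is available, like the Git Shell above) should let regeneration keep working past the first change.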
