I am writing a Jelastic manifest where I deploy two nodes. On one node, I have an API that I need to query in order to set up the second. Something along these lines:
type: install
name: My test
nodes:
  - count: 1
    cloudlets: 4
    nodeGroup: auth
    nodeType: docker
    image: my-api:latest
  - count: 1
    cloudlets: 16
    nodeGroup: cp
    nodeType: docker
    image: some-service:latest
onInstall:
  - script: |
      import com.hivext.api.core.utils.Transport;
      try {
        const body = new Transport().get("http://${nodes.auth.master.intIP}:9011/api/key", {
          "Authorization": "my-api-key"
        });
        return { result: 0, body: body };
      } catch (e) {
        return {
          type: "error",
          message: "unknown error: " + e
        };
      }
In my script, when I do
const body = new Transport().get("http://www.google.com");
it works; I get the body content of the Google page. However,
const body = new Transport().get("http://${nodes.auth.master.intIP}:9011/api/key", {
  "Authorization": "my-api-key"
});
returns
ERROR: script.response: {"type":"error","message":"unknown error: JavaException: java.io.IOException: Failed to select a proxy"}
What am I doing wrong? How can I query my service in a script as in the above snippet? When I curl it from a regular shell, it works:
curl -s -H "Authorization: my-api-key" http://${nodes.auth.master.intIP}:9011/api/key
Also, incidentally, where can I find the documentation of com.hivext.api.core.utils.Transport?
You can't access your environment node via its internal IP from the script (at least it's not guaranteed to work).
As a workaround, you can access your node via its public IP or endpoint.
If a public IP or endpoint is not applicable in your case and your service must be accessed only internally, you can try to access your node via curl and the ExecCmd API. For example:
type: install
name: My test
nodes:
  - count: 1
    cloudlets: 4
    nodeGroup: auth
    nodeType: docker
    image: my-api:latest
  - count: 1
    cloudlets: 16
    nodeGroup: cp
    nodeType: docker
    image: some-service:latest
onInstall:
  - script: |
      function execInternalApi(nodeId, url) {
        let resp = api.env.control.ExecCmdById({
          envName: '${env.name}',
          nodeId: nodeId,
          commandList: toJSON([{
            command: 'curl -fSsl -H "Authorization: my-api-key" \'' + url + '\''
          }])
        })
        if (resp.result != 0) return resp
        return { result: 0, out: resp.responses[0].out }
      }

      let body = execInternalApi(${nodes.auth.master.id}, 'http://localhost:9011/api/key');
      return { result: 0, body: body };
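Note that ExecCmdById returns the command's stdout as a plain string, so if your endpoint replies with JSON you still have to parse it inside the script. A minimal sketch of the script's tail under that assumption (the payload shape is hypothetical):

let resp = execInternalApi(${nodes.auth.master.id}, 'http://localhost:9011/api/key');
if (resp.result != 0) return resp;
let key = JSON.parse(resp.out); // resp.out is the raw string captured from curl
return { result: 0, key: key };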
I'm trying to find a way to run Next.js (v13.0.6) with OG image generation logic (using @vercel/og) in AWS Lambda.
Everything works fine locally (in dev and prod mode), but when I try to execute the Lambda handler I get "statusCode": 500.
It only fails for APIs that involve ImageResponse (and runtime: 'experimental-edge', which is a requirement for @vercel/og).
I'm pretty sure the problem is caused by the Edge Runtime not being configured correctly.
Here is my handler code.
next build with output: 'standalone' in next.config.js creates the folder .next/standalone.
Inside standalone, handler.js:
const { parse } = require('url');
const NextServer = require('next/dist/server/next-server').default;
const serverless = require('serverless-http');
const path = require('path');

process.env.NODE_ENV = 'production';
process.chdir(__dirname);

const currentPort = parseInt(process.env.PORT, 10) || 3000;

const nextServer = new NextServer({
  hostname: 'localhost',
  port: currentPort,
  dir: path.join(__dirname),
  dev: false,
  customServer: false,
  conf: {...} // copied from `server.js` in the same folder
});

const requestHandler = nextServer.getRequestHandler();

// this is an AWS Lambda handler that converts the Lambda event
// into an HTTP request that the Next server can process
const handler = serverless(async (req, res) => {
  // const parsedUrl = parse(req.url, true);
  try {
    await requestHandler(req, res);
  } catch (err) {
    console.error(err);
    res.statusCode = 500;
    res.end('internal server error');
  }
});

module.exports = {
  handler
};
I'm testing it locally with local-lambda, but getting similar results when testing against the AWS-deployed Lambda.
What is confusing is that server.js (in .next/standalone) has a similar setup; it only adds an HTTP server on top of it.
Update:
AWS Lambda logs show:
ERROR [Error [CompileError]: WebAssembly.compile(): Compiling function #64 failed: invalid value type 'Simd128', enable with --experimental-wasm-simd #+3457 ]
Update 2:
The first error was fixed by selecting Node 16 for the AWS Lambda runtime; now I'm getting this error:
{
"errorType": "Error",
"errorMessage": "write after end",
"trace": [
"Error [ERR_STREAM_WRITE_AFTER_END]: write after end",
" at new NodeError (node:internal/errors:372:5)",
" at ServerlessResponse.end (node:_http_outgoing:846:15)",
" at ServerlessResponse.end (/var/task/node_modules/next/dist/compiled/compression/index.js:22:783)",
" at NodeNextResponse.send (/var/task/node_modules/next/dist/server/base-http/node.js:93:19)",
" at NextNodeServer.handleRequest (/var/task/node_modules/next/dist/server/base-server.js:332:47)",
" at processTicksAndRejections (node:internal/process/task_queues:96:5)",
" at async /var/task/index.js:34:5"
]
}
At the moment of writing, Vercel's runtime: 'experimental-edge' seems to be unstable (I ran into multiple issues with it).
I ended up recreating the @vercel/og lib without the wasm and next.js dependencies; it can be found here
and can simply be used in AWS Lambda. It depends on @resvg/resvg-js instead of the wasm version, which uses native binaries, so there should not be much perf loss compared to wasm.
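For reference, a minimal sketch of that approach in plain Node: satori renders markup to SVG, and @resvg/resvg-js rasterizes it to PNG with a native binary instead of wasm (@vercel/og is built on the same pieces). The font path and element tree are placeholders, and satori's CommonJS interop is an assumption:

const fs = require('fs');
const satori = require('satori').default; // assumption: satori's CJS build is available
const { Resvg } = require('@resvg/resvg-js');

async function renderOgImage() {
  // satori takes a React-element-like object tree plus font data
  const svg = await satori(
    { type: 'div', props: { style: { fontSize: 64, color: 'black' }, children: 'Hello OG' } },
    {
      width: 1200,
      height: 630,
      fonts: [{ name: 'Inter', data: fs.readFileSync('./Inter-Regular.ttf'), weight: 400, style: 'normal' }],
    }
  );
  // rasterize the SVG to a PNG Buffer, ready for the Lambda response
  return new Resvg(svg).render().asPng();
}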
Why can't I connect to CockroachDB via PowerShell?
I use this command:
cockroach sql --url postgres://username@cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername;
I get the following error: Invalid clustername 08004
but the cluster name is the right one.
Edit:
Node.js:
// For secure connection:
// const fs = require('fs');
const { Pool } = require("pg");

// Configure the database connection.
const config = {
  user: "xxxxx",
  password: "xxxx",
  cluster_name: "xxxx",
  host: "xxxx",
  database: "wxxx",
  port: 26257,
  ssl: {
    rejectUnauthorized: false,
  },
  // For secure connection:
  /* ssl: {
    ca: fs.readFileSync('/certs/ca.crt')
      .toString()
  } */
};

// Create a connection pool
const pool = new Pool(config);

// router is an Express Router from the surrounding app
router.get('/', async (req, res) => {
  const client = await pool.connect();
  const d = await client.query('CREATE TABLE test (id INT, name VARCHAR, desc VARCHAR);');
  console.log(d);
  return res.json({
    message: 'BOSY'
  });
});
I get this error:
CodeParamsRoutingFailed: rejected by BackendConfigFromParams: Invalid cluster name
Try specifying the cluster name before the dbname, like this:
cockroach sql --url postgres://username@cloud-host:26257/clustername.defaultdb?sslmode=require
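If you are connecting from the Node snippet above, the same fix can be sketched there too; note that cluster_name is not a pg config key (pg silently ignores it), so the cluster name goes into the database field instead. Credentials and host below are placeholders:

const { Pool } = require("pg");

const pool = new Pool({
  user: "username",                  // placeholder
  password: "password",              // placeholder
  host: "cloud-host",                // placeholder
  port: 26257,
  database: "clustername.defaultdb", // cluster name before the dbname
  ssl: { rejectUnauthorized: false },
});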
I wonder if there's an issue with special characters in the shell. Having never used PowerShell this is only a guess, but & is a statement separator there, so the unquoted URL would be cut off at the &. Does it work if you put the URL string in quotes?
cockroach sql --url "postgres://username@cloud-host:26257/defaultdb?sslmode=require&options=--cluster=clustername";
The following snippet (Node/TypeScript) utilizes Google's Cloud Build API (v1) to build a container and push it to Google's Container Registry (GCR). If it is possible, what's the right way to have Cloud Build push the image to AWS ECR instead of GCR?
import { cloudbuild_v1 } from "googleapis";

[...]

const manifestLocation = `gs://${manifestFile.bucket}/${manifestFile.fullpath}`;
const buildDestination = `gcr.io/${GOOGLE_PROJECT_ID}/xxx:yyy`;

const result = await builds.create({
  projectId: GOOGLE_PROJECT_ID,
  requestBody: {
    steps: [
      {
        name: 'gcr.io/cloud-builders/gcs-fetcher',
        args: [
          '--type=Manifest',
          `--location=${manifestLocation}`
        ]
      },
      {
        name: 'docker',
        args: ['build', '-t', buildDestination, '.'],
      }
    ],
    images: [buildDestination]
  }
})
Yes, you can, by adding a custom step where you do that.
For this you can have a step with the docker image that builds the image and pushes it to AWS ECR:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', '<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE_NAME>', '.' ]
Here is a guide on how to use Cloud Build which may be useful to you.
Basically, in your use case you can just change the value of the destination to the AWS ECR URL, like this:
import { cloudbuild_v1 } from "googleapis";

[...]

const manifestLocation = `gs://${manifestFile.bucket}/${manifestFile.fullpath}`;
const buildDestination = `<AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<IMAGE_NAME>`;

const result = await builds.create({
  projectId: GOOGLE_PROJECT_ID,
  requestBody: {
    steps: [
      {
        name: 'gcr.io/cloud-builders/gcs-fetcher',
        args: [
          '--type=Manifest',
          `--location=${manifestLocation}`
        ]
      },
      {
        name: 'docker',
        args: ['build', '-t', buildDestination, '.'],
      }
    ],
    images: [buildDestination]
  }
})
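One caveat worth checking: the images: field pushes with Cloud Build's own Google credentials, which cannot authenticate against ECR, so you may need to drop images: and add an explicit login-and-push step instead. A rough, unverified sketch of such an extra entry for the steps array (the builder image is a placeholder that must bundle both the AWS CLI and a Docker client, and AWS credentials must be made available to the build, e.g. via Secret Manager):

{
  // hypothetical builder image with aws + docker installed
  name: 'my-builders/aws-cli-with-docker',
  entrypoint: 'bash',
  args: [
    '-c',
    'aws ecr get-login-password --region <REGION> | ' +
    'docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com && ' +
    `docker push ${buildDestination}`,
  ],
}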
Summary
I'm making an AWS Lambda function with AWS SAM.
This function needs a database, so I chose DynamoDB.
Now I'm setting up a local environment for AWS SAM and DynamoDB.
It seems that I succeeded in setting up local DynamoDB, but it fails to connect when running the local AWS SAM function:
failed to make Query API call, ResourceNotFoundException: Cannot do operations on a non-existent table
I want to know how to solve this issue.
What I tried
I created the local table and checked that test data is inserted.
❯ aws dynamodb create-table --cli-input-json file://test/positive-line-bot_table.json --endpoint-url http://localhost:8000
TABLEDESCRIPTION 1578904757.61 0 arn:aws:dynamodb:ddblocal:000000000000:table/PositiveLineBotTable PositiveLineBotTable 0 ACTIVE
ATTRIBUTEDEFINITIONS Id N
BILLINGMODESUMMARY PROVISIONED 0.0
KEYSCHEMA Id HASH
PROVISIONEDTHROUGHPUT 0.0 0.0 0 5 5
❯ aws dynamodb batch-write-item --request-items file://test/positive-line-bot_table_data.json --endpoint-url http://localhost:8000
❯ aws dynamodb list-tables --endpoint-url http://localhost:8000
TABLENAMES PositiveLineBotTable
❯ aws dynamodb get-item --table-name PositiveLineBotTable --key '{"Id":{"N":"1"}}' --endpoint-url http://localhost:8000
ID 1
NAME test
But when I run AWS SAM locally, it seems that it does not connect to this local DynamoDB, although the table does exist locally.
❯ sam local start-api --env-vars test/env.json
Fetching lambci/lambda:go1.x Docker container image......
Mounting /Users/jpskgc/go/src/line-positive-bot/positive-line-bot as /var/task:ro,delegated inside runtime container
START RequestId: c9f19371-4fea-1e25-09ec-5f628f7fcb7a Version: $LATEST
failed to make Query API call, ResourceNotFoundException: Cannot do operations on a non-existent table
Function 'PositiveLineBotFunction' timed out after 5 seconds
Function returned an invalid response (must include one of: body, headers, multiValueHeaders or statusCode in the response object). Response received:
2020-01-13 18:46:10 127.0.0.1 - - [13/Jan/2020 18:46:10] "GET /positive HTTP/1.1" 502 -
❯ curl http://127.0.0.1:3000/positive
{"message":"Internal server error"}
I want to know how to actually connect to the local DynamoDB table.
Some code
Here is the function code in Go.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

func exitWithError(err error) {
	fmt.Fprintln(os.Stderr, err)
	os.Exit(1)
}

type Item struct {
	Key  int
	Desc string
	Data map[string]interface{}
}

type Event struct {
	Type       string  `json:"type"`
	ReplyToken string  `json:"replyToken"`
	Source     Source  `json:"source"`
	Timestamp  int64   `json:"timestamp"`
	Message    Message `json:"message"`
}

type Message struct {
	Type string `json:"type"`
	ID   string `json:"id"`
	Text string `json:"text"`
}

type Source struct {
	UserID string `json:"userId"`
	Type   string `json:"type"`
}

func handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	endpoint := os.Getenv("DYNAMODB_ENDPOINT")
	tableName := os.Getenv("DYNAMODB_TABLE_NAME")
	sess := session.Must(session.NewSession())
	config := aws.NewConfig().WithRegion("ap-northeast-1")
	if len(endpoint) > 0 {
		config = config.WithEndpoint(endpoint)
	}
	svc := dynamodb.New(sess, config)
	params := &dynamodb.ScanInput{
		TableName: aws.String(tableName),
	}
	result, err := svc.Scan(params)
	if err != nil {
		exitWithError(fmt.Errorf("failed to make Query API call, %v", err))
	}
	items := []Item{}
	err = dynamodbattribute.UnmarshalListOfMaps(result.Items, &items)
	if err != nil {
		exitWithError(fmt.Errorf("failed to unmarshal Query result items, %v", err))
	}
	var words []string
	for _, item := range items {
		for _, v := range item.Data {
			words = append(words, v.(string))
		}
	}
	rand.Seed(time.Now().UnixNano())
	i := rand.Intn(len(words))
	word := words[i]
	return events.APIGatewayProxyResponse{
		Body:       word,
		StatusCode: 200,
	}, nil
}

func main() {
	lambda.Start(handler)
}
Here is env.json.
I tried changing docker.for.mac.host.internal to my local IP address, but it did not solve the issue.
{
  "PositiveLineBotFunction": {
    "DYNAMODB_ENDPOINT": "http://docker.for.mac.host.internal:8000",
    "DYNAMODB_TABLE_NAME": "PositiveLineBotTable"
  }
}
Here is template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  positive-line-bot
Globals:
  Function:
    Timeout: 5
Resources:
  PositiveLineBotFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: positive-line-bot/
      Handler: positive-line-bot
      Runtime: go1.x
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref PositiveLineBotTable
      Tracing: Active
      Events:
        CatchAll:
          Type: Api
          Properties:
            Path: /positive
            Method: GET
      Environment:
        Variables:
          DYNAMODB_ENDPOINT: ''
          DYNAMODB_TABLE_NAME: ''
  PositiveLineBotTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: 'PositiveLineBotTable'
      AttributeDefinitions:
        - AttributeName: 'Id'
          AttributeType: 'N'
      KeySchema:
        - AttributeName: 'Id'
          KeyType: 'HASH'
      ProvisionedThroughput:
        ReadCapacityUnits: '5'
        WriteCapacityUnits: '5'
      BillingMode: PAY_PER_REQUEST
Outputs:
  PositiveLineBotAPI:
    Description: 'API Gateway endpoint URL for Prod environment for PositiveLineBot'
    Value: !Sub 'https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/positive/'
  PositiveLineBotFunction:
    Description: 'PositiveLineBot Lambda Function ARN'
    Value: !GetAtt PositiveLineBotFunction.Arn
  PositiveLineBotFunctionIamRole:
    Description: 'Implicit IAM Role created for PositiveLineBot'
    Value: !GetAtt PositiveLineBotFunction.Arn
Here is the full source code.
https://github.com/jpskgc/line-positive-bot
See this answer.
The solution consists of 2 parts:
1. Create a docker network and start the dynamodb-local container and the API using that network.
2. Adjust the endpoint appropriately.
For me, I did:
docker network create dynamodb-network
docker run -d -v "$PWD":/dynamodb_local_db -p 8000:8000 --network dynamodb-network --name dynamodb cnadiminti/dynamodb-local
sam local start-api --docker-network dynamodb-network -n env.json
and in my code I referenced the docker name as the DNS address:
// DocumentClient comes from the aws-sdk v2 DynamoDB client
const dynamodb = require("aws-sdk/clients/dynamodb");

const awsRegion = process.env.AWS_REGION || "us-east-2";
const options = {
  region: awsRegion,
};
// AWS_SAM_LOCAL is set automatically by `sam local`
if (process.env.AWS_SAM_LOCAL) {
  // the container name doubles as the hostname on the shared docker network
  options.endpoint = "http://dynamodb:8000";
}
const docClient = new dynamodb.DocumentClient(options);
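For the Go setup in the question, the same approach would mean starting dynamodb-local on a named docker network, passing --docker-network to sam local start-api, and pointing the endpoint at the container name. A sketch of the adjusted env.json, assuming the container is named dynamodb as above:

{
  "PositiveLineBotFunction": {
    "DYNAMODB_ENDPOINT": "http://dynamodb:8000",
    "DYNAMODB_TABLE_NAME": "PositiveLineBotTable"
  }
}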
The question is: how do I register multiple nodes with Consul under the same ID? I'm running a Consul server in Docker, and on localhost I run two processes of the same HelloWorld Node.js app on my Mac.
Problem: the entry for the process running at 3000 gets replaced by the process running at 3001, hence I end up with one node only.
Question 2: Where can I download this GUI client (not the Web UI) for Mac, as shown in the screenshot?
Payload for node 1 (port 3000):
{
  HTTP: 'http://My-Mac-Pro.local:3000/health',
  Interval: '15s',
  Name: 'My-Mac-Pro.local',
  ID: 'user1'
}
Payload for node 2 (port 3001):
{
  HTTP: 'http://My-Mac-Pro.local:3001/health',
  Interval: '15s',
  Name: 'My-Mac-Pro.local',
  ID: 'user2'
}
Node.js code:
let http = require("http");

// `body` is one of the registration payloads above; resolve/reject
// come from an enclosing Promise in the real app.
http.request({
  method: "PUT",
  hostname: env.CONSUL_HOST,
  port: 8500,
  path: "/v1/agent/check/register",
  headers: {
    "content-type": "application/json; charset=utf-8"
  }
}, function (response) {
  if (response.statusCode == 200) {
    resolve();
  }
}).on("error", reject).end(JSON.stringify(body));
Expectation: see the multiple nodes in the web UI.
When you register services, each service should be registered with a unique service ID.
It could be something such as ${serviceName}-${hostname}-${ip}-${port}-${process.pid()}-${uuid.v4()}, or any combination of those that ensures your service ID is unique. A distinct ID in the registration payload is what lets Consul tell apart instances of the same app/serviceIdentity, so they won't "override" one another.
Example of registration payload:
const id = `${ip}-${hostname}-${serviceIdentity}-${port}`;
const registrationDetails = {
  Name: serviceIdentity,
  ID: id,
  Address: ip,
  Port: parseInt(port),
  Check: {
    CheckID: `http-${id}`,
    Name: `http-${id}`,
    TLSSkipVerify: true,
    HTTP: `http://${host}:${port}/health`,
    Interval: '10s',
    Notes: `Service http health`,
    DeregisterCriticalServiceAfter: '60s',
  },
};
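To tie this back to the question's snippet, a sketch of sending that payload to the local agent; note the path for registering a service (with its embedded check) is /v1/agent/service/register, not /v1/agent/check/register. env.CONSUL_HOST and registrationDetails are assumed from the code above:

const http = require("http");

const req = http.request({
  method: "PUT",
  hostname: env.CONSUL_HOST,
  port: 8500,
  path: "/v1/agent/service/register",
  headers: { "content-type": "application/json; charset=utf-8" }
}, (response) => {
  // Consul replies 200 with an empty body on success
  console.log("registered:", response.statusCode === 200);
});
req.on("error", console.error);
req.end(JSON.stringify(registrationDetails));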