Deploy API Gateway from Command Line - bash

I have been trying for two days to deploy a POST REST API, meant to be the trigger for an already existing Lambda function on AWS, through a bash script using the aws-cli. I am able to upload and deploy the API, but it doesn't work. I tested my Lambda function through the test feature on AWS itself and it works. But when I call the API it returns this as the header
{"x-amzn-ErrorType":"InternalServerErrorException"}
and this as the body
{
"message": "Internal server error"
}
and this is the log I see in the API Gateway test functionality:
Execution log for request ba8d70b6-bb5e-49c5-9ff4-7927983c51d8
Thu Nov 26 16:41:55 UTC 2020 : Starting execution for request: ba8d70b6-bb5e-49c5-9ff4-7927983c51d8
Thu Nov 26 16:41:55 UTC 2020 : HTTP Method: POST, Resource Path: /provoletta
Thu Nov 26 16:41:55 UTC 2020 : Method request path: {}
Thu Nov 26 16:41:55 UTC 2020 : Method request query string: {}
Thu Nov 26 16:41:55 UTC 2020 : Method request headers: {}
Thu Nov 26 16:41:55 UTC 2020 : Method request body before transformations: {
"device": {
"uuid": "4",
"lastPosition": "4",
"lastSeen": "4",
"raspberryId": "4",
"roomNumber": "34"
}
}
Thu Nov 26 16:41:55 UTC 2020 : Endpoint request URI: https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/arn:aws:lambda:us-east-1:***:function:provoletta/invocations
Thu Nov 26 16:41:55 UTC 2020 : Endpoint request headers: {x-amzn-lambda-integration-tag=ba8d70b6-bb5e-49c5-9ff4-7927983c51d8, Authorization=************************************************************************************************************************************************************************************************************************************************************************************************************************53a5a7, X-Amz-Date=20201126T164155Z, x-amzn-apigateway-api-id=eca0u6a3ed, X-Amz-Source-Arn=arn:aws:execute-api:us-east-1:***:eca0u6a3ed/test-invoke-stage/POST/provoletta, Accept=application/json, User-Agent=AmazonAPIGateway_eca0u6a3ed, X-Amz-Security-Token=*** [TRUNCATED]
Thu Nov 26 16:41:55 UTC 2020 : Endpoint request body after transformations: {
"device": {
"uuid": "4",
"lastPosition": "4",
"lastSeen": "4",
"raspberryId": "4",
"roomNumber": "34"
}
}
Thu Nov 26 16:41:55 UTC 2020 : Sending request to https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/arn:aws:lambda:us-east-1:***:function:provoletta/invocations
Thu Nov 26 16:41:55 UTC 2020 : Execution failed due to configuration error: Invalid permissions on Lambda function
Thu Nov 26 16:41:55 UTC 2020 : Method completed with status: 500
This is how I am trying to create and deploy the API through a bash script:
#!/bin/sh
api_id=$(aws apigateway create-rest-api --name 'provoletta' --query 'id' --output text)
resource_id=$(aws apigateway get-resources --rest-api-id $api_id --query 'items' --output text)
resource_id=${resource_id::-2}
result_id=$(aws apigateway create-resource --rest-api-id $api_id --parent-id $resource_id --path-part provoletta --query 'id' --output text)

aws apigateway put-method \
    --rest-api-id $api_id \
    --region $AWS_REGION \
    --resource-id $result_id \
    --http-method POST \
    --authorization-type "NONE"

aws apigateway put-method-response \
    --region $AWS_REGION \
    --rest-api-id $api_id \
    --resource-id $result_id \
    --http-method POST \
    --status-code 200

aws apigateway put-integration \
    --region $AWS_REGION \
    --rest-api-id $api_id \
    --resource-id $result_id \
    --http-method POST \
    --type AWS \
    --integration-http-method POST \
    --uri arn:aws:apigateway:$AWS_REGION:lambda:path/2015-03-31/functions/arn:aws:lambda:$AWS_REGION:$ACCOUNT_ID:function:provoletta/invocations \
    --request-templates '{"application/x-www-form-urlencoded":"{\"body\": $input.json(\"$\")}"}'

aws apigateway put-integration-response \
    --region $AWS_REGION \
    --rest-api-id $api_id \
    --resource-id $result_id \
    --http-method POST \
    --status-code 200 \
    --selection-pattern ""

aws apigateway create-deployment --rest-api-id $api_id --stage-name provoletta
If I create an API for the same Lambda function through the API Gateway console itself, it works without problems. So, what's the problem with this script?

It looks like a permission problem. Specifically, it looks like you did not give API Gateway permission to invoke your Lambda function:
Execution failed due to configuration error: Invalid permissions on Lambda function.
To fix this you need to add permission for API Gateway to invoke your Lambda function, as described in the AWS CLI documentation for aws lambda add-permission.
Based on your code, this should already do the trick, assuming your Lambda function is called provoletta:
aws lambda add-permission \
    --region $AWS_REGION \
    --function-name provoletta \
    --action lambda:InvokeFunction \
    --statement-id AllowGatewayToInvokeFunction \
    --principal apigateway.amazonaws.com
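If you want to scope that permission to this specific API rather than to API Gateway in general, you can additionally pass --source-arn. This is an optional refinement sketched here with the $api_id, $AWS_REGION and $ACCOUNT_ID variables from your script (note the distinct --statement-id, since statement IDs must be unique within a function's policy):
# source ARN format: arn:aws:execute-api:<region>:<account>:<api-id>/<stage>/<METHOD>/<resource-path>
# ("*" below matches any stage)
aws lambda add-permission \
    --region $AWS_REGION \
    --function-name provoletta \
    --action lambda:InvokeFunction \
    --statement-id AllowGatewayToInvokeProvoletta \
    --principal apigateway.amazonaws.com \
    --source-arn "arn:aws:execute-api:$AWS_REGION:$ACCOUNT_ID:$api_id/*/POST/provoletta"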

Related

aws cli hangs when in background

I have a command which runs pretty well when I just run it:
time aws sqs send-message --queue-url https://my_sqs_url --message-body "$(date "+%H:%M:%S_%N")"
{
"MD5OfMessageBody": "a19f365993d45d4885f7f15bce8aac97",
"MessageId": "30971fa7-d8ac-4540-9541-aebc38598856"
}
real 0m1.321s
user 0m1.174s
sys 0m0.117s
If I run it in the background, the SQS message is sent, but the process hangs indefinitely (or at least I'm not patient enough to see when it eventually ends):
aws sqs send-message --queue-url https://my_sqs_url --message-body "$(date "+%H:%M:%S_%N")" &
[1] 9561
While it hangs, I see two processes instead of one:
ps -eFH | grep "aws sqs"
root 9561 2980 0 2210 912 1 09:29 pts/0 00:00:00 aws sqs send-message --queue-url https://my_sqs_url --message-body 09:29:30_009996044
root 9563 9561 0 63048 59172 1 09:29 pts/0 00:00:01 aws sqs send-message --queue-url https://my_sqs_url --message-body 09:29:30_009996044
The questions: why does it hang, and how do I do this properly?
This should work:
time aws sqs send-message --queue-url https://my_sqs_url --message-body "$(date "+%H:%M:%S_%N")" & wait $!
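The wait $! makes the shell block until the backgrounded command actually exits, so the job no longer lingers. Separately, if you are on AWS CLI v2, its automatic output pager is a common reason a backgrounded call never returns; disabling it is worth trying (this is a guess at the cause, not something confirmed by the output above):
# per invocation
aws sqs send-message --no-cli-pager --queue-url https://my_sqs_url --message-body "$(date "+%H:%M:%S_%N")" &
# or once for the whole shell session
export AWS_PAGER=""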

flux deployment error X509 certificate signed by unknown authority

My aim is to deploy a container-labelling-webhook solution onto my AKS cluster using Flux CD v2. Once I have it operational, I want to roll it out to multiple clusters.
Command used to bootstrap the AKS cluster (the Flux installation, I mean):
flux bootstrap git --url=https://github.xxxxxx.com/user1/test-repo.git --username=$GITHUB_USER --password=$GITHUB_TOKEN --token-auth=true --path=clusters/my-cluster
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
Now I am trying to deploy my Helm chart. Note that deploying the chart with Helm by itself works fine; it just doesn't work via Flux.
flux create source helm label-webhook --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart --namespace label-webhook --cert-file=./tls/label-webhook.pem --key-file=./tls/label-webhook-key.pem --ca-file=./tls/ca.pem --verbose
✚ generating HelmRepository source
► applying secret with repository credentials
✔ authentication configured
► applying HelmRepository source
✔ source created
◎ waiting for HelmRepository source reconciliation
✗ failed to fetch Helm repository index: failed to cache index to temporary file: Get "https://github.xxxxxx.com/user1/test-repo/tree/main/chart/index.yaml": x509: certificate signed by unknown authority
I am generating certs with the process below:
cat << EOF > ca-config.json
{
  "signing": {
    "default": {
      "expiry": "43830h"
    },
    "profiles": {
      "default": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "43830h"
      }
    }
  }
}
EOF
cat << EOF > ca-csr.json
{
  "hosts": [
    "cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "AU",
      "L": "Melbourne",
      "O": "xxxxxx",
      "OU": "Container Team",
      "ST": "aks1-dev"
    }
  ]
}
EOF
docker run -it --rm -v ${PWD}:/work -w /work debian bash
apt-get update && apt-get install -y curl &&
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o /usr/local/bin/cfssl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o /usr/local/bin/cfssljson && \
chmod +x /usr/local/bin/cfssl && \
chmod +x /usr/local/bin/cfssljson
cfssl gencert -initca ca-csr.json | cfssljson -bare /tmp/ca
cfssl gencert \
    -ca=/tmp/ca.pem \
    -ca-key=/tmp/ca-key.pem \
    -config=ca-config.json \
    -hostname="mutation-label-webhook,mutation-label-webhook.label-webhook.svc.cluster.local,mutation-label-webhook.label-webhook.svc,localhost,127.0.0.1" \
    -profile=default \
    ca-csr.json | cfssljson -bare /tmp/label-webhook
root@91bc7986cb94:/work# ls -alrth /tmp/
total 32K
drwxr-xr-x 1 root root 4.0K Jul 29 04:42 ..
-rw-r--r-- 1 root root 2.0K Jul 29 04:43 ca.pem
-rw-r--r-- 1 root root 1.8K Jul 29 04:43 ca.csr
-rw------- 1 root root 3.2K Jul 29 04:43 ca-key.pem
-rw-r--r-- 1 root root 2.2K Jul 29 04:43 label-webhook.pem
-rw-r--r-- 1 root root 1.9K Jul 29 04:43 label-webhook.csr
-rw------- 1 root root 3.2K Jul 29 04:43 label-webhook-key.pem
drwxrwxrwt 1 root root 4.0K Jul 29 04:43 .
root@91bc7986cb94:/work#
root@83faa77cd5f6:/work# cp -apvf /tmp/* .
'/tmp/ca-key.pem' -> './ca-key.pem'
'/tmp/ca.csr' -> './ca.csr'
'/tmp/ca.pem' -> './ca.pem'
'/tmp/label-webhook-key.pem' -> './label-webhook-key.pem'
'/tmp/label-webhook.csr' -> './label-webhook.csr'
'/tmp/label-webhook.pem' -> './label-webhook.pem'
root@83faa77cd5f6:/work# pwd
/work
chmod -R 777 tls/
helm upgrade --install mutation chart --namespace label-webhook --create-namespace --set secret.cert=$(cat tls/label-webhook.pem | base64 | tr -d '\n') --set secret.key=$(cat tls/label-webhook-key.pem | base64 | tr -d '\n') --set secret.cabundle=$(openssl base64 -A <"tls/ca.pem")
I am totally confused as to how to get Flux working.
Flux doesn't trust the certificate presented by your Git server github.xxxxxx.com.
A quick workaround is to use the --insecure-skip-tls-verify flag as described here: https://fluxcd.io/docs/cmd/flux_bootstrap_git/
Full command:
flux create source helm label-webhook --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart --namespace label-webhook --cert-file=./tls/label-webhook.pem --key-file=./tls/label-webhook-key.pem --ca-file=./tls/ca.pem --verbose --insecure-skip-tls-verify
It's interesting that the flux bootstrap git step didn't fail, but that step probably only creates the configuration for the repository and doesn't actually establish a connection to it.
The certificates you are generating have nothing to do with your Git server's TLS certificate. It seems you're doing some admission-webhook magic, but the certs you generate there have nothing in common with the github.xxxxxx.com certificate, so there is no need to pass them in the --ca-file flag.
The permanent solution is to get the CA certificate that signed github.xxxxxx.com: ask the administrators of the Git server to provide you the CA file and pass that one in the --ca-file flag, not the one you created for your webhook experiments.
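If you want to see which CA you actually need before contacting the Git server administrators, you can inspect the chain the server presents. This is a generic openssl check, not part of the original answer; substitute your own host:
# print the subject and issuer of the certificate served by github.xxxxxx.com;
# the issuer is the CA whose certificate belongs in --ca-file
openssl s_client -connect github.xxxxxx.com:443 -servername github.xxxxxx.com -showcerts </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer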

Single insert is okay, but bulk import throws error of type "not_x_content_exception"

I'm trying to import data into Elasticsearch from a JSON file which contains one document per line (data only, no action/metadata lines).
Here is how I'm creating the index and trying to insert one document:
DELETE /tests
PUT /tests
{}
PUT /tests/test/_mapping
{
  "test": {
    "properties": {
      "env": { "type": "keyword" },
      "uid": { "type": "keyword" },
      "ok": { "type": "boolean" }
    }
  }
}
POST /tests/test
{"env":"dev", "uid":12346, "ok":true}
GET /tests/_search
{"query":{"match_all":{}}}
Everything works fine: no errors, the document is indexed correctly and can be found in ES.
Now let's try to do it using elasticdump.
Here is content of file I'm trying to import:
cat ./data.json
{"env":"prod","uid":1111,"ok":true}
{"env":"prod","uid":2222,"ok":true}
Here is how I'm trying to import:
elasticdump \
    --input="./data.json" \
    --output="http://elk:9200" \
    --output-index="tests/test" \
    --debug \
    --limit=10000 \
    --headers='{"Content-Type": "application/json"}' \
    --type=data
But I get the error "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes".
Here is full output:
root#node-tools:/data# elasticdump \
> --input="./s.json" \
> --output="http://elk:9200" \
> --output-index="tests/test" \
> --debug \
> --limit=10000 \
> --headers='{"Content-Type": "application/json"}' \
> --type=data
Tue, 16 Apr 2019 16:26:28 GMT | starting dump
Tue, 16 Apr 2019 16:26:28 GMT | got 2 objects from source file (offset: 0)
Tue, 16 Apr 2019 16:26:28 GMT [debug] | discovered elasticsearch output major version: 6
Tue, 16 Apr 2019 16:26:28 GMT [debug] | thisUrl: http://elk:9200/tests/test/_bulk, payload.body: "{\"index\":{\"_index\":\"tests\",\"_type\":\"test\"}}\nundefined\n{\"index\":{\"_index\":\"tests\",\"_type\":\"test\"}}\nundefined\n"
{ _index: 'tests',
_type: 'test',
_id: 'ndj4JmoBindjidtNmyKf',
status: 400,
error:
{ type: 'mapper_parsing_exception',
reason: 'failed to parse',
caused_by:
{ type: 'not_x_content_exception',
reason:
'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes' } } }
{ _index: 'tests',
_type: 'test',
_id: 'ntj4JmoBindjidtNmyKf',
status: 400,
error:
{ type: 'mapper_parsing_exception',
reason: 'failed to parse',
caused_by:
{ type: 'not_x_content_exception',
reason:
'Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes' } } }
Tue, 16 Apr 2019 16:26:28 GMT | sent 2 objects to destination elasticsearch, wrote 0
Tue, 16 Apr 2019 16:26:28 GMT | got 0 objects from source file (offset: 2)
Tue, 16 Apr 2019 16:26:28 GMT | Total Writes: 0
Tue, 16 Apr 2019 16:26:28 GMT | dump complete
What am I doing wrong? Why does the manual insert work fine while _bulk throws errors? Any ideas?
UPDATE: I tried Python's elasticsearch_loader and it works fine:
elasticsearch_loader \
    --es-host="http://elk:9200" \
    --index="tests" \
    --type="test" \
    json --json-lines ./data.json
Some additional info could be found here: https://github.com/taskrabbit/elasticsearch-dump/issues/534
JSON documents should be provided wrapped in a _source field.
WAS: {"env":"prod","uid":1111,"ok":true}
NOW: {"_source":{"env":"prod","uid":1111,"ok":true}}
This can be done on the fly by elasticdump using the --transform argument:
elasticdump \
    --input="./data.json" \
    --output="http://elk:9200" \
    --output-index="tests/test" \
    --debug \
    --limit=10000 \
    --type=data \
    --transform="doc._source=Object.assign({},doc)"
Thanks to @ferronrsmith on GitHub.
More details here: https://github.com/taskrabbit/elasticsearch-dump/issues/534
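If you would rather rewrite the file once instead of transforming it on the fly, the same _source wrapping can be produced with a jq one-liner (assuming jq is installed; data_source.json is just a name chosen here for the converted file):
# wrap each line's document in a _source field, one compact JSON object per line
jq -c '{_source: .}' ./data.json > ./data_source.json
elasticdump \
    --input="./data_source.json" \
    --output="http://elk:9200" \
    --output-index="tests/test" \
    --type=data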

Create API gateway in localstack

I was able to set up localstack (https://github.com/atlassian/localstack) and also create a Lambda function in it (using the create-function ... command). However, I couldn't find a way to create an API Gateway in localstack so that the Lambda function can be called through it.
Basically, I need an API Gateway (and its ARN) through which the Lambda function can be invoked.
Walkthrough for creating a NodeJS Lambda together with an API Gateway via the CLI:
First we create a simple NodeJS Lambda:
const apiTestHandler = (payload, context, callback) => {
  console.log(`Function apiTestHandler called with payload ${JSON.stringify(payload)}`);
  callback(null, {
    statusCode: 201,
    body: JSON.stringify({
      somethingId: payload.pathParameters.somethingId
    }),
    headers: {
      "X-Click-Header": "abc"
    }
  });
}

module.exports = {
  apiTestHandler,
}
Put that into a zip file called apiTestHandler.zip and upload it to localstack:
aws lambda create-function \
    --region us-east-1 \
    --function-name api-test-handler \
    --runtime nodejs6.10 \
    --handler index.apiTestHandler \
    --memory-size 128 \
    --zip-file fileb://apiTestHandler.zip \
    --role arn:aws:iam::123456:role/role-name \
    --endpoint-url=http://localhost:4574
Now we can create our REST API:
aws apigateway create-rest-api --region us-east-1 --name 'API Test' --endpoint-url=http://localhost:4567
This gives the following response:
{
  "name": "API Test",
  "id": "487109A-Z548",
  "createdDate": 1518081479
}
With the ID we got here, we can ask for its parent-ID:
aws apigateway get-resources --region us-east-1 --rest-api-id 487109A-Z548 --endpoint-url=http://localhost:4567
Response:
{
  "items": [
    {
      "path": "/",
      "id": "0270A-Z23550",
      "resourceMethods": {
        "GET": {}
      }
    }
  ]
}
Now we have everything to create our resource together with its path:
aws apigateway create-resource \
    --region us-east-1 \
    --rest-api-id 487109A-Z548 \
    --parent-id 0270A-Z23550 \
    --path-part "{somethingId}" \
    --endpoint-url=http://localhost:4567
Response:
{
  "resourceMethods": {
    "GET": {}
  },
  "pathPart": "{somethingId}",
  "parentId": "0270A-Z23550",
  "path": "/{somethingId}",
  "id": "0662807180"
}
The ID we got here is needed to create our linked GET Method:
aws apigateway put-method \
    --region us-east-1 \
    --rest-api-id 487109A-Z548 \
    --resource-id 0662807180 \
    --http-method GET \
    --request-parameters "method.request.path.somethingId=true" \
    --authorization-type "NONE" \
    --endpoint-url=http://localhost:4567
We are almost there - one of the last things to do is to create our integration with the already uploaded lambda:
aws apigateway put-integration \
    --region us-east-1 \
    --rest-api-id 487109A-Z548 \
    --resource-id 0662807180 \
    --http-method GET \
    --type AWS_PROXY \
    --integration-http-method POST \
    --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:000000000000:function:api-test-handler/invocations \
    --passthrough-behavior WHEN_NO_MATCH \
    --endpoint-url=http://localhost:4567
Last but not least: Deploy our API to our desired stage:
aws apigateway create-deployment \
    --region us-east-1 \
    --rest-api-id 487109A-Z548 \
    --stage-name test \
    --endpoint-url=http://localhost:4567
Now we can test it:
curl http://localhost:4567/restapis/487109A-Z548/test/_user_request_/HowMuchIsTheFish
Response:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 34 100 34 0 0 9 0 0:00:03 0:00:03 --:--:-- 9
{"somethingId":"HowMuchIsTheFish"}
I hope this helps.
Hint 1: For easier use I recommend installing awscli-local (https://github.com/localstack/awscli-local). With this tool you can use the command "awslocal" and don't have to type "--endpoint-url= ..." for each command.
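For example, with awscli-local installed, the create-rest-api call from the beginning of this walkthrough shortens to the following (same command, just routed through the wrapper):
awslocal apigateway create-rest-api --region us-east-1 --name 'API Test'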
Walkthrough for using Serverless Framework and Localstack:
You can also use the Serverless Framework (https://serverless.com/).
First install it via npm:
npm install serverless -g
Now you can create a sample application based on the aws-nodejs template:
serverless create --template aws-nodejs
In order to have an HTTP endpoint, you have to edit serverless.yml and add the corresponding event:
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: ping
          method: get
In order to run this against your localstack installation you have to use the serverless-localstack plugin (https://github.com/temyers/serverless-localstack):
npm install serverless-localstack
Now you have to edit your serverless.yml again, add the plugin, and adjust your endpoints. In my case localstack is running inside the Docker Toolbox, so its IP is 192.168.99.100; you may have to change this to localhost, depending on your setup:
plugins:
  - serverless-localstack

custom:
  localstack:
    debug: true
    stages:
      - local
      - dev
    host: http://192.168.99.100
    endpoints:
      S3: http://192.168.99.100:4572
      DynamoDB: http://192.168.99.100:4570
      CloudFormation: http://192.168.99.100:4581
      Elasticsearch: http://192.168.99.100:4571
      ES: http://192.168.99.100:4578
      SNS: http://192.168.99.100:4575
      SQS: http://192.168.99.100:4576
      Lambda: http://192.168.99.100:4574
      Kinesis: http://192.168.99.100:4568
Now you can try to deploy it:
serverless deploy --verbose --stage local
This will create an S3 bucket, upload your Lambda, and create a CloudFormation stack. However, the process will fail due to some inconsistencies of localstack compared to AWS. Don't be dismayed though: the created CloudFormation template works fine, you just need one additional request and you are done:
awslocal cloudformation update-stack --template-body file://.serverless/cloudformation-template-update-stack.json --stack-name aws-nodejs-local
Now your lambda is deployed and can be tested:
curl http://192.168.99.100:4567/restapis/75A-Z278430A-Z/local/_user_request_/ping
Response:
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed
100 364 100 364 0 0 111 0 0:00:03 0:00:03 --:--:-- 111
{"message":"Go Serverless v1.0! Your function executed successfully!","input":{"body":null,"headers":{"host":"192.168.99.100:4567","accept":"*/*","user-agent":"curl/7.49.1"},"resource":"/restapis/75A-Z278430A-Z/local/_user_request_/ping","queryStringParameters":{},"httpMethod":"GET","stageVariables":{},"path":"/ping","pathParameters":{},"isBase64Encoded":false}}
Hope this helps.
Looks like there is an open issue related to setting up API Gateway with localstack on GitHub:
https://github.com/localstack/localstack/issues/129
You could try following the steps provided in the answer there.
Copied from the GitHub issue:
"""
One option would be to use the serverless framework (https://github.com/serverless/serverless). Otherwise, you can call the LocalStack services directly (via the CLI or an SDK) to create an API Gateway resource+method+integration, and connect them to your Lambda function.
Here are a few pointers that might be helpful:
https://ig.nore.me/2016/03/setting-up-lambda-and-a-gateway-through-the-cli/ (the "Creating a role" part can be skipped)
https://github.com/atlassian/localstack/issues/101
https://github.com/temyers/serverless-localstack
"""

Curl POST call for server API which uploads a file

I want to translate a series of Postman calls into bash in order to create a script. It was easy until now, where I want to POST an xlsx file with roles as form-data. I use this script:
curl -i -X POST \
    -H 'externalTenantId: 326c1027-bf20-4cd6-ac83-33581c50249b' \
    -H "uid: user" \
    -H "Content-Type: multipart/form-data" \
    -F 'payload={
          "importMode": "OVERWRITE",
          "tenantId": "326c1027-bf20-4cd6-ac83-33581c50249b",
          "file": "roles.xlsx"
        }' \
    -F 'file=@roles.xlsx' \
    "http://server:8080/iamsvc/batchImport/v2/direct/roles"
This is the Postman call which works:
POST http://server:8080/iamsvc/batchImport/v2/direct/roles
Headers:
uid: user#domain.com
externalTenantId: 4cd6-ac83-33581c50249b-327522
Payload:
{
  "file": [Excel file to be uploaded],
  "importMode": "OVERWRITE",
  "tenantId": "4cd6-ac83-33581c50249b-327522"
}
This is the error that I get:
HTTP/1.1 100 Continue
HTTP/1.1 400 Bad Request
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=0BA814182C258E1DFE62ACF98409F9CD; Path=/iamsvc/; Secure; HttpOnly
Content-Length: 0
Date: Mon, 26 Sep 2016 12:59:50 GMT
Connection: close
The answer was in curl --manual, and it works like this:
curl -i -X POST \
    -H "uid: user" \
    -H "externalTenantId: 326c1027-bf20-4cd6-ac83-33581c50249b" \
    -F "file=@/home/user/zscripts/iamapi/roles.xlsx" \
    -F "importMode=OVERWRITE" \
    -F "tenantId=326c1027-bf20-4cd6-ac83-33581c50249b" \
    http://server:8080/iamsvc/batchImport/v2/direct/roles
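Two details make this work: -F "name=@path" attaches the file's contents (while plain -F "name=value" fields carry the other parameters), and the multipart/form-data Content-Type header is left to curl, which adds the required boundary itself instead of the hand-written header from the first attempt. If the server is picky about the uploaded part's MIME type, curl can also set it explicitly per part; this is a generic curl feature, not something the answer above needed:
-F "file=@/home/user/zscripts/iamapi/roles.xlsx;type=application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"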
