I'm trying to use the Lambda function splunk-firehose-flowlogs-processor to transform data from VPC Flow Logs before sending it to Splunk. I'm following this Amazon blog post: https://aws.amazon.com/blogs/big-data/ingest-vpc-flow-logs-into-splunk-using-amazon-kinesis-data-firehose/
I tested it with sample data and it worked, but suddenly I started getting a Lambda function error:
[ERROR] KeyError: 'message'
Traceback (most recent call last):
File "/var/task/app.py", line 127, in lambda_handler
records = list(processRecords(event['records']))
File "/var/task/app.py", line 74, in processRecords
joinedData = ''.join(transformLogEvent(data))
File "/var/task/app.py", line 66, in transformLogEvent
return log_event['message'] + '\n'
I'm really confused; does anyone know why this Lambda function throws this error?
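For reference, the line that fails in the blueprint is essentially return log_event['message'] + '\n'. A minimal, hypothetical defensive variant (it only avoids the crash on records that carry no 'message' key; it does not explain why such records appear) could look like this:

def transformLogEvent(log_event):
    # Hypothetical guard: fall back to an empty string when a record
    # has no 'message' key instead of raising KeyError.
    return log_event.get('message', '') + '\n'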
Related
I've been having a problem lately where I can't access the value of any variable by its name. It's not a connection problem, since I can read the value with the read() function just fine. However, whenever I use the read_by_name function, I always get the same error: pyads.pyads_ex.ADSError: ADSError: Unknown Error (857212673).
Because it's an unknown error, I haven't been able to find documentation on it. I'm using a BC9120 as a PLC and TwinCAT 2. Does anyone know what might be causing this?
Code:
if self.plc.is_open == False:
    self.plc.open()
print(self.plc.read(pyads.INDEXGROUP_MEMORYBIT, 0*8 + 0, pyads.PLCTYPE_BOOL))
print(self.plc.read_by_name("Test.M0", pyads.PLCTYPE_BOOL))
Logs:
False
Traceback (most recent call last):
File "c:\Users\MAP9AV\Documents\Banca\Modelo Preditivo\Thread_Sup.py", line 50, in run
print(self.plc.read_by_name("Test.M0", pyads.PLCTYPE_BOOL))
File "C:\Users\MAP9AV\AppData\Local\Programs\Python\Python39\lib\site-packages\pyads\connection.py", line 540, in read_by_name
return adsSyncReadByNameEx(
File "C:\Users\MAP9AV\AppData\Local\Programs\Python\Python39\lib\site-packages\pyads\pyads_ex.py", line 1154, in adsSyncReadByNameEx
handle = adsGetHandle(port, address, data_name)
File "C:\Users\MAP9AV\AppData\Local\Programs\Python\Python39\lib\site-packages\pyads\pyads_ex.py", line 890, in adsGetHandle
handle = adsSyncReadWriteReqEx2(
File "C:\Users\MAP9AV\AppData\Local\Programs\Python\Python39\lib\site-packages\pyads\pyads_ex.py", line 774, in adsSyncReadWriteReqEx2
raise ADSError(err_code)
pyads.pyads_ex.ADSError: ADSError: Unknown Error (857212673).
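To be clear, the direct read that does work looks like this (a standalone sketch with a placeholder AMS net ID and the TwinCAT 2 runtime port):

import pyads

# Illustrative sketch: address the %M memory area directly by index
# group/offset instead of by symbol name (AMS net ID is a placeholder).
plc = pyads.Connection("5.1.2.3.1.1", 801)  # 801 = TwinCAT 2 runtime 1
plc.open()
m0 = plc.read(pyads.INDEXGROUP_MEMORYBIT, 0 * 8 + 0, pyads.PLCTYPE_BOOL)
print(m0)
plc.close()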
Hi, I'm trying to use AWS Rekognition -> Lambda -> API Gateway -> Bubble.io.
I want to pass an image as a base64 string and run image recognition on it.
Here is the Lambda code:
import json
import boto3
import base64

def lambda_handler(event, context):

    def detect_labels():
        # 1. create a client object, to connect to rekognition
        client = boto3.client('rekognition')
        image = base64.b64decode(event['face'])
        # 4. call Rekognition, store result in 'response'
        response = client.detect_labels(
            Image={
                'Bytes': image
            },
            MaxLabels=20,
        )
        # 6. Return response from function
        return response

    # Call detect_labels
    response = detect_labels()

    # Return results to API gateway
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }
If I test it in the Lambda console with this test event:
{
    "face": "<imgbase64codehere>"
}
it works fine and returns a success message.
However, when I set up the same thing in Bubble.io (sending the image), it won't work.
I checked CloudWatch, and the error is:
[ERROR] KeyError: 'face'
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 30, in lambda_handler
response = detect_labels()
File "/var/task/lambda_function.py", line 14, in detect_labels
image = base64.b64decode(event['face'])
I am sure there is no connection problem (if I don't use JSON, it works fine).
So what am I doing wrong in the setup?
Thanks for your help in advance.
In your test you are sending an event that is just a JSON object with a face key.
It looks like the implementation you've set up uses Amazon API Gateway. If so, your Lambda function may receive the event in a different format, depending on how you've implemented the API Gateway integration.
The Lambda developer guide has an example of the JSON format. You will probably want to access the body attribute.
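A minimal sketch of what that could look like, assuming a proxy integration (the fallback to the bare event is only there so the direct console test keeps working):

import json
import base64

def lambda_handler(event, context):
    # With a proxy integration, API Gateway delivers the client's JSON
    # payload as a string in event['body']; in a direct console test the
    # keys sit at the top level of the event instead.
    payload = json.loads(event['body']) if 'body' in event else event
    image = base64.b64decode(payload['face'])
    return {
        'statusCode': 200,
        'body': json.dumps({'received_bytes': len(image)})
    }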
I very often get an error when trying to stop or start EC2 instances through AWS Lambda. It's quite strange, because sometimes it works (for both starting and stopping the instances).
The error I get is below. When I run a test in the Lambda console, it executes successfully most of the time, but when it's triggered through a CloudWatch Events rule, the function fails very often.
The traceback points at line 48 of my code:
[ERROR] ConnectTimeoutError: Connect timeout on endpoint URL: "https://ec2.ap-southeast-2.amazonaws.com/"
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 48, in lambda_handler
if stop_ec2_instances():
File "/var/task/lambda_function.py", line 155, in stop_ec2_instances
ec2_client.stop_instances(InstanceIds=ec2_instances)
File "/var/task/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/task/botocore/client.py", line 621, in _make_api_call
http, parsed_response = self._make_request(
File "/var/task/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/var/task/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/var/task/botocore/endpoint.py", line 136, in _send_request
while self._needs_retry(attempts, operation_model, request_dict,
File "/var/task/botocore/endpoint.py", line 253, in _needs_retry
responses = self._event_emitter.emit(
File "/var/task/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/var/task/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/var/task/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/var/task/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/var/task/botocore/retryhandler.py", line 250, in __call__
should_retry = self._should_retry(attempt_number, response,
File "/var/task/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/var/task/botocore/retryhandler.py", line 316, in __call__
checker_response = checker(attempt_number, response,
File "/var/task/botocore/retryhandler.py", line 222, in __call__
return self._check_caught_exception(
File "/var/task/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/var/task/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/var/task/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/var/task/botocore/httpsession.py", line 287, in send
raise ConnectTimeoutError(endpoint_url=request.url, error=e)
This is my code for starting and stopping the instances. I have even already moved the instantiation of ec2_res and ec2_client inside the functions, but it did not help:
def start_ec2_instances():
    try:
        ec2_res = boto3.resource('ec2', region_name="ap-southeast-2")
        ec2_client = boto3.client('ec2', region_name="ap-southeast-2")
        ec2_client.start_instances(InstanceIds=ec2_instances)
        for ec2_id in ec2_instances:
            instance = ec2_res.Instance(id=ec2_id)
            logger.info("Waiting instance " + ec2_id + " to start")
            instance.wait_until_running()
        return True
    except bex.ClientError as err:
        logger.error(err.response['Error']['Message'])
        return False

def stop_ec2_instances():
    try:
        ec2_res = boto3.resource('ec2', region_name="ap-southeast-2")
        ec2_client = boto3.client('ec2', region_name="ap-southeast-2")
        ec2_client.stop_instances(InstanceIds=ec2_instances)
        for ec2_id in ec2_instances:
            instance = ec2_res.Instance(id=ec2_id)
            logger.info("Waiting instance " + ec2_id + " to stop")
            instance.wait_until_stopped()
        return True
    except bex.ClientError as err:
        logger.error(err.response['Error']['Message'])
        return False
Has any of you ever faced the same issue?
Thanks
Edit: I set the function timeout to 8 minutes. Under normal conditions, the time required to execute the function is less than 5 minutes.
Additional note:
Sometimes I work over a VPN in ap-southeast-2, which is a different region from the one I live in. The instances (and other components) are also deployed in that region (ap-southeast-2).
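A sketch of one thing that could be tried (purely illustrative; the timeout and retry values are arbitrary) is giving the boto3 client a longer connect timeout and more retries, so a single slow connection attempt to the EC2 endpoint does not fail the whole invocation:

from botocore.config import Config
import boto3

# Illustrative only: explicit timeouts and retry policy for the EC2 client.
boto_cfg = Config(
    region_name="ap-southeast-2",
    connect_timeout=30,
    read_timeout=60,
    retries={"max_attempts": 10, "mode": "standard"},
)
ec2_client = boto3.client("ec2", config=boto_cfg)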
Your code to start and stop the instances looks right to me. The timeout happens because your operation does not complete within the configured timeout of your Lambda function.
You can measure how long your function takes by recording the time at the start and at the end of the function and subtracting the two.
The default timeout is 3 seconds, so you should consider increasing the timeout interval for your Lambda function, say to 5 minutes.
Please note that the maximum value for this timeout is 900 seconds (15 minutes) and you cannot configure a value higher than that. I am sure the above code would complete within that limit, so it should not be a problem for you.
How do I increase the timeout interval for my Lambda function?
There are multiple ways to do this: the AWS CLI, the AWS Console, or probably some other way.
In the AWS Console, open the function's configuration, set the timeout there, and click the Save button after making the change.
Hope this helps.
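From the CLI this would be aws lambda update-function-configuration --function-name <your-function> --timeout 300, and it can also be done programmatically; a small boto3 sketch (the function name is a placeholder):

import boto3

# Raise the function timeout to 5 minutes (function name is hypothetical).
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-ec2-scheduler",
    Timeout=300,
)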
There was no problem with this code before, but one day an error suddenly started to appear. Please help me. (I used a translator.) Thanks.
I have no idea what's wrong. Sorry.
@client.command(pass_context=True)
async def test(ctx):
    server = ctx.message.server
    name = "NAME"
    await client.create_channel(server, "NAME", type=discord.ChannelType.voice)
This is the error:
Traceback (most recent call last):
File "/home/ydepong93/.local/lib/python3.6/site-packages/discord/ext/commands/core.py", line 50, in wrapped
ret = yield from coro(args, **kwargs)
File "feelblue.py", line 5790, in test
await client.create_channel(server, name, type=discord.ChannelType.text)
File "/home/ydepong93/.local/lib/python3.6/site-packages/discord/client.py", line 2163, in create_channel
data = yield from self.http.create_channel(server.id, name, str(type), permission_overwrites=perms)
File "/home/ydepong93/.local/lib/python3.6/site-packages/discord/http.py", line 200, in request
raise HTTPException(r, data)
discord.errors.HTTPException: BAD REQUEST (status code: 400)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ydepong93/.local/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 846, in process_commands
yield from command.invoke(ctx)
File "/home/ydepong93/.local/lib/python3.6/site-packages/discord/ext/commands/core.py", line 374, in invoke
yield from injected(ctx.args, **ctx.kwargs)
File "/home/ydepong93/.local/lib/python3.6/site-packages/discord/ext/commands/core.py", line 54, in wrapped
raise CommandInvokeError(e) from e
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: HTTPException: BAD REQUEST (status code: 400)
Have a look here: https://discordpy.readthedocs.io/en/latest/api.html?highlight=create_voice#discord.Guild.create_voice_channel
You must use
await ctx.guild.create_voice_channel(**kwargs)
instead of your code. Next time, it's better to give the docs a look before asking here.
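For illustration, a minimal command on a recent discord.py could look like this (the bot setup, channel name, and token are placeholders; in the newer API the guild is available directly on the context):

import discord
from discord.ext import commands

intents = discord.Intents.default()
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
async def test(ctx):
    # ctx.guild replaces ctx.message.server from the old 0.16-style API
    await ctx.guild.create_voice_channel("NAME")

bot.run("YOUR_TOKEN")  # placeholder token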
I am trying to enumerate Azure AD users from an Azure subscription with this code:
import os
from pprint import pprint
from azure.common.client_factory import get_client_from_auth_file
from azure.graphrbac import GraphRbacManagementClient

WORKING_DIRECTORY = os.getcwd()
TENANT_ID = "REDACTED_AZURE_ID_OF_MY_AZURE_AD_USER"
AZURE_AUTH_LOCATION = os.path.join(WORKING_DIRECTORY, "mycredentials.json")  # from: az ad sp create-for-rbac --sdk-auth > mycredentials.json

# I've tried with get_client_from_cli_profile() while logged in through the Azure CLI
# I've tried with and without the auth_path and tenant_id parameters
rbac_client = get_client_from_auth_file(GraphRbacManagementClient, auth_path=AZURE_AUTH_LOCATION, tenant_id=TENANT_ID)

# Try to list users
for user in rbac_client.users.list():
    pprint(user.__dict__)
As detailed in the comments, I've tried to fix the issue with a couple of unsuccessful attempts. Here is the stack trace:
/home/guillaumedsde/.virtualenvs/champollion/bin/python /home/guillaumedsde/PycharmProjects/champollion/champollion/champollion.py
Traceback (most recent call last):
File "/home/guillaumedsde/PycharmProjects/champollion/champollion/champollion.py", line 582, in <module>
gitlab_project_member.access_level)
File "/home/guillaumedsde/PycharmProjects/champollion/champollion/champollion.py", line 306, in create_role_assignment
"principal_id": get_user_azure_id(user)} # get_user_azure_id(user)} # TODO
File "/home/guillaumedsde/PycharmProjects/champollion/champollion/champollion.py", line 329, in get_user_azure_id
for user in rbac_client.users.list():
File "/home/guillaumedsde/.virtualenvs/champollion/lib/python3.6/site-packages/msrest/paging.py", line 131, in __next__
self.advance_page()
File "/home/guillaumedsde/.virtualenvs/champollion/lib/python3.6/site-packages/msrest/paging.py", line 117, in advance_page
self._response = self._get_next(self.next_link)
File "/home/guillaumedsde/.virtualenvs/champollion/lib/python3.6/site-packages/azure/graphrbac/operations/users_operations.py", line 158, in internal_paging
raise models.GraphErrorException(self._deserialize, response)
azure.graphrbac.models.graph_error.GraphErrorException: Operation returned an invalid status code 'Not Found'
Process finished with exit code 1
This was a bug, fixed in azure-common 1.1.13:
https://pypi.org/project/azure-common/1.1.13/
You can now simply do this (with no tenant ID):
rbac_client = get_client_from_auth_file(GraphRbacManagementClient,auth_path=AZURE_AUTH_LOCATION)
I took this opportunity to fix the CLI version of this method as well.
(I own this code at MS)
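For completeness, a small usage sketch under the assumption that azure-common >= 1.1.13 and azure-graphrbac are installed, reusing AZURE_AUTH_LOCATION from the question (the printed attributes are just examples from the graphrbac User model):

from azure.common.client_factory import get_client_from_auth_file
from azure.graphrbac import GraphRbacManagementClient

rbac_client = get_client_from_auth_file(
    GraphRbacManagementClient, auth_path=AZURE_AUTH_LOCATION
)
for user in rbac_client.users.list():
    print(user.display_name, user.user_principal_name)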