AWS IoT button toggle for IFTTT - aws-lambda

I have an AWS IoT button set up and working with IFTTT and SmartLife to turn a device on/off. Currently I have it set up to use single and double click to turn on and off, because IFTTT doesn't seem to have a toggle applet (at least, not for use with SmartLife).
How can I make it a toggle, so I can use a single click to alternately turn the device on and off?
I'm looking for a free solution.

There is a solution using Apilio, but it's not free: Create a toggle between two actions in IFTTT.
For a free solution, use DynamoDB from Lambda to save the button state, and invert the state on each invocation. The function then sends either "IotButton2" or "IotButton2Off" to IFTTT.
'''
Example Lambda IoT button IFTTT toggle

Test payload:
{
    "serialNumber": "GXXXXXXXXXXXXXXXXX",
    "batteryVoltage": "990mV",
    "clickType": "SINGLE"  # or "DOUBLE" or "LONG"
}
'''
from __future__ import print_function

import json
import logging
import urllib2

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

maker_key = 'xxxxxxxxxxxxxxxxx'  # change this to your Maker key


def get_button_state(db, name):
    table = db.Table('toggles')
    try:
        response = table.get_item(Key={'name': name})
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        # response['Item'] == {u'name': u'IotButton2', u'on': False}
        if 'Item' in response:
            return response['Item']['on']
    return False


def set_button_state(db, name, state):
    table = db.Table('toggles')
    try:
        table.put_item(Item={'name': name, 'on': state})
    except ClientError as e:
        print(e.response['Error']['Message'])


def lambda_handler(event, context):
    logger.info('Received event: ' + json.dumps(event))
    db = boto3.resource('dynamodb')
    maker_event = "IotButton2"
    # maker_event += ":" + event["clickType"]
    state = get_button_state(db, maker_event)
    logger.info(maker_event + " state = " + ("on" if state else "off"))
    set_button_state(db, maker_event, not state)
    if state:
        maker_event += "Off"
    logger.info('Maker event: ' + maker_event)
    url = 'https://maker.ifttt.com/trigger/%s/with/key/%s' % (maker_event, maker_key)
    f = urllib2.urlopen(url)
    response = f.read()
    f.close()
    logger.info('"' + maker_event + '" event has been sent to IFTTT Maker channel')
    return response
The above version responds to any type of click (single, double, or long). You can control three different switches by uncommenting this line:
maker_event += ":" + event["clickType"]
which would translate to these IFTTT events:
IotButton2:SINGLE
IotButton2:SINGLEOff
IotButton2:DOUBLE
IotButton2:DOUBLEOff
IotButton2:LONG
IotButton2:LONGOff
Create the DynamoDB table. For my example, the table name is "toggles", with one key field "name" and one boolean field "on". The table has to exist; the entry itself is created the first time you click the button or test the Lambda function.
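A minimal sketch of that one-time setup with boto3 (on-demand billing is my assumption, and you need dynamodb:CreateTable rights to run it):

import boto3

db = boto3.resource('dynamodb')
# 'name' is the only key attribute; the boolean 'on' attribute needs no schema entry.
db.create_table(
    TableName='toggles',
    KeySchema=[{'AttributeName': 'name', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'name', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST',
)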
You have to update the Lambda function's role to include your DynamoDB permissions. Add the following statement to the policy:
{
    "Effect": "Allow",
    "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem"
    ],
    "Resource": [
        "arn:aws:dynamodb:us-east-1:xxxxxxxx:table/toggles"
    ]
}
(Get the ARN from AWS console DynamoDB -> table -> toggles -> Additional information.)
You can also edit the above function to handle multiple buttons by checking the serial number, as sketched below.
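A hedged sketch of such a lookup; the serial numbers and the second event name are placeholders:

# Hypothetical mapping from button serial numbers to IFTTT event names.
BUTTONS = {
    'GXXXXXXXXXXXXXXXXX': 'IotButton2',
    'GYYYYYYYYYYYYYYYYY': 'IotButton3',
}

def event_name_for(event):
    # Fall back to the default event for unknown buttons.
    return BUTTONS.get(event.get('serialNumber'), 'IotButton2')

In lambda_handler, you would then replace maker_event = "IotButton2" with maker_event = event_name_for(event).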

Related

How can I deploy this code on AWS Lambda and generate a CSV?

I want to deploy this code in Lambda and run it every hour to generate a CSV. How can I do that, and what are the steps?
#!/usr/bin/env python3
import argparse
import boto3
import datetime
import re
import csv
import random
import pandas as pd

now = datetime.datetime.utcnow()
start = '2022-12-01'
end = '2022-12-20'
suffix = ' 00:00'

# to use a specific profile e.g. 'dev'
session = boto3.session.Session(profile_name='dev')
cd = session.client('ce', 'eu-west-2')

results = []
token = None
while True:
    kwargs = {'NextPageToken': token} if token else {}
    data = cd.get_cost_and_usage(
        TimePeriod={'Start': start, 'End': end},
        Granularity='MONTHLY',
        Metrics=['UnblendedCost'],
        Filter={"And": [
            {"Dimensions": {"Key": "SERVICE",
                            "Values": ["Amazon Relational Database Service",
                                       "Amazon Elastic Compute Cloud - Compute"]}},
            {"Tags": {"Key": "Name", "Values": ["qa-mssql"]}},
        ]},
        GroupBy=[{'Type': 'TAG', 'Key': 'app'}, {'Type': 'TAG', 'Key': 'Name'}],
        **kwargs)
    results += data['ResultsByTime']
    token = data.get('NextPageToken')
    if not token:
        break

def print_csv():
    print(','.join(['date', 'teams', 'resource_names', 'Amounts', 'resource_type']))
    for result_by_time in results:
        for group in result_by_time['Groups']:
            amount = group['Metrics']['UnblendedCost']['Amount']
            resource_type = 'mssql'
            print(result_by_time['TimePeriod']['End'] + suffix, ',',
                  ','.join(group['Keys']).replace("app$", "").replace("Name$", ""),
                  ',', amount, ',', resource_type)

print_csv()
I am pretty new to Lambda and want to know the basic step-by-step approach.
To deploy your code to Lambda, package it as a zip file and either deploy it directly to the function from the console or upload it to S3 and reference that path in Lambda.
If your code doesn't require any dependencies beyond what's in the Lambda environment, you can edit the code directly in the console and save it. (Note that pandas is not in the default Python runtime, so you would need to package it with your code or attach it as a layer.)
Add a trigger to the Lambda with a CloudWatch Events schedule, e.g. rate(1 hour), to run it hourly.
Refer to this aws doc1 and this doc2
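For the handler itself, here is a minimal sketch that runs the Cost Explorer query and writes the CSV to S3 instead of stdout. The bucket name my-cost-reports is a placeholder, the Filter and pagination from the question are elided for brevity, and note that profile_name='dev' won't work inside Lambda: the function's execution role supplies credentials and needs ce:GetCostAndUsage and s3:PutObject.

import csv
import io
import boto3

def lambda_handler(event, context):
    # Inside Lambda, rely on the execution role instead of a named profile.
    ce = boto3.client('ce', region_name='eu-west-2')
    data = ce.get_cost_and_usage(
        TimePeriod={'Start': '2022-12-01', 'End': '2022-12-20'},
        Granularity='MONTHLY',
        Metrics=['UnblendedCost'],
        GroupBy=[{'Type': 'TAG', 'Key': 'app'}, {'Type': 'TAG', 'Key': 'Name'}],
    )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(['date', 'teams', 'resource_names', 'Amounts', 'resource_type'])
    for result_by_time in data['ResultsByTime']:
        for group in result_by_time['Groups']:
            # Keys look like ['app$team', 'Name$qa-mssql']; strip the tag prefix.
            keys = [k.split('$', 1)[-1] for k in group['Keys']]
            writer.writerow(
                [result_by_time['TimePeriod']['End'] + ' 00:00'] + keys +
                [group['Metrics']['UnblendedCost']['Amount'], 'mssql'])
    # 'my-cost-reports' is a placeholder bucket name.
    boto3.client('s3').put_object(
        Bucket='my-cost-reports', Key='costs.csv',
        Body=buf.getvalue().encode('utf-8'))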

Call DRF ViewSet via Celery task

I have a Django Rest Framework ViewSet:
class MyModelViewSet(generics.RetrieveUpdateDestroyAPIView):
    def perform_destroy(self, instance):
        # do something besides deleting the object
        ...
Now I'm writing a Celery periodic task that deletes expired objects based on a filter (let's say end_date < now).
I want the task to reuse and perform the same actions that are executed in the ViewSet's perform_destroy method.
Can this be done? How?
Thanks!
You can solve this by building a DRF Request yourself and having a scheduled Celery task call the view with it. It works well; I've implemented this before.
Example code:
from celery import shared_task as task  # assumption: Celery's shared_task; adjust to your app's decorator
from rest_framework.request import Request as DRFRequest
from django.conf import settings
from django.http import HttpRequest

from your_module.views import MyModelViewSet

CELERY_CACHING_QUEUE = getattr(settings, "CELERY_CACHING_QUEUE", None)


def delete_resource(resource_pk: int) -> None:
    """
    Delete the resource identified by resource_pk through the view.
    """
    print(f'Starting deleting resource {resource_pk}...')
    request = HttpRequest()
    request.method = 'DELETE'
    request.META = {
        'SERVER_NAME': settings.ALLOWED_HOSTS[0],
        'SERVER_PORT': 443
    }
    drf_request = DRFRequest(request)
    # If your API requires a user with access permission, set one on the
    # request before dispatching, e.g.:
    # drf_request.user = user_with_access_permission
    try:
        view = MyModelViewSet(
            kwargs={'pk': resource_pk},
            request=drf_request
        )
        view.initial(drf_request)
        view.delete(drf_request)
    except Exception as e:
        print(f'Cannot delete resource: {resource_pk}, error: {e}')
        return
    print(f'Finished deleting resource {resource_pk}...')


@task(name="delete_resource_task", queue=CELERY_CACHING_QUEUE)
def delete_resource_task(resource_pk: int) -> None:
    """
    Async task that deletes a resource.
    """
    delete_resource(resource_pk)
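Hypothetical usage, e.g. from a view or a periodic schedule (42 is a placeholder primary key):

# Enqueue the deletion to run asynchronously on a Celery worker.
delete_resource_task.delay(42)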

Using SNS as a Target to Trigger Lambda Function

I have a Lambda function that is working 100%. I set up my CloudWatch rule and connected the target to the Lambda directly, and everything works fine.
My manager wants me to change the target in CloudWatch to SNS, then use the SNS topic as the trigger for my Lambda.
I have made the necessary changes and now my Lambda function is no longer working.
import os, json, boto3

# Note: sts_client was missing from the posted code; validate_instance needs it.
sts_client = boto3.client('sts')


def validate_instance(rec_event):
    sns_msg = json.loads(rec_event['Records'][0]['Sns']['Message'])
    account_id = sns_msg['account']
    event_region = sns_msg['region']
    assumedRoleObject = sts_client.assume_role(
        RoleArn="arn:aws:iam::{}:role/{}".format(account_id, 'VSC-Admin-Account-Lambda-Execution-Role'),
        RoleSessionName="AssumeRoleSession1"
    )
    credentials = assumedRoleObject['Credentials']
    print(credentials)
    ec2_client = boto3.client('ec2', event_region,
                              aws_access_key_id=credentials['AccessKeyId'],
                              aws_secret_access_key=credentials['SecretAccessKey'],
                              aws_session_token=credentials['SessionToken'],
                              )
    return ec2_client


def lambda_handler(event, context):
    ip_permissions = []
    print("The event log is " + str(event))
    # Ensure that we have an event name to evaluate.
    if 'detail' not in event or 'eventName' not in event['detail']:
        return {"Result": "Failure", "Message": "Lambda not triggered by an event"}
    elif event['detail']['eventName'] == 'AuthorizeSecurityGroupIngress':
        items_ip_permissions = event['detail']['requestParameters']['ipPermissions']['items']
        security_group_id = event['detail']['requestParameters']['groupId']
        print("The total items are " + str(items_ip_permissions))
        for item in items_ip_permissions:
            s = [val['cidrIp'] for val in item['ipRanges']['items']]
            print("The value of ipranges are " + str(s))
            if ((item['fromPort'] == 22 and item['toPort'] == 22)
                    or (item['fromPort'] == 143 and item['toPort'] == 143)
                    or (item['fromPort'] == 3389 and item['toPort'] == 3389)) \
                    and '0.0.0.0/0' in s:
                print("Revoking the security rule for the item" + str(item))
                ip_permissions.append(item)
        result = revoke_security_group_ingress(security_group_id, ip_permissions)
    else:
        return


def revoke_security_group_ingress(security_group_id, ip_permissions):
    print("The security group id is " + str(security_group_id))
    print("The ip_permissions value to be revoked is " + str(ip_permissions))
    ip_permissions_new = normalize_paramter_names(ip_permissions)
    response = boto3.client('ec2').revoke_security_group_ingress(
        GroupId=security_group_id, IpPermissions=ip_permissions_new)
    print("The response of the revoke is " + str(response))


def normalize_paramter_names(ip_items):
    # Start building the permissions items list.
    new_ip_items = []
    # First, build the basic parameter list.
    for ip_item in ip_items:
        new_ip_item = {
            "IpProtocol": ip_item['ipProtocol'],
            "FromPort": ip_item['fromPort'],
            "ToPort": ip_item['toPort']
        }
        # CidrIp or CidrIpv6 (IPv4 or IPv6)?
        if 'ipv6Ranges' in ip_item and ip_item['ipv6Ranges']:
            # This is an IPv6 permission range, so change the key names.
            ipv_range_list_name = 'ipv6Ranges'
            ipv_address_value = 'cidrIpv6'
            ipv_range_list_name_capitalized = 'Ipv6Ranges'
            ipv_address_value_capitalized = 'CidrIpv6'
        else:
            ipv_range_list_name = 'ipRanges'
            ipv_address_value = 'cidrIp'
            ipv_range_list_name_capitalized = 'IpRanges'
            ipv_address_value_capitalized = 'CidrIp'
        ip_ranges = []
        # Next, build the IP permission list.
        for item in ip_item[ipv_range_list_name]['items']:
            ip_ranges.append(
                {ipv_address_value_capitalized: item[ipv_address_value]}
            )
        new_ip_item[ipv_range_list_name_capitalized] = ip_ranges
        new_ip_items.append(new_ip_item)
    return new_ip_items
I assume missing permissions are causing the invocation failure.
You need to explicitly grant SNS permission to invoke the Lambda function.
Below is the CLI command:
aws lambda add-permission --function-name my-function --action lambda:InvokeFunction --statement-id sns-my-topic \
--principal sns.amazonaws.com --source-arn arn:aws:sns:us-east-2:123456789012:my-topic
my-function -> Name of the lambda function
my-topic -> Name of the SNS topic
Reference: https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html
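If you prefer to do the same from Python, here is a sketch with boto3, using the same placeholder names as the CLI example:

import boto3

# Grant the SNS topic permission to invoke the function.
boto3.client('lambda').add_permission(
    FunctionName='my-function',
    StatementId='sns-my-topic',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn='arn:aws:sns:us-east-2:123456789012:my-topic',
)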

How to find Knowledge base ID (kbid) for QnAMaker?

I am trying to integrate a QnA Maker knowledge base with Azure Bot Service.
I am unable to find the knowledge base id on the QnA Maker portal.
How do I find the kbid in the QnA portal?
The Knowledge Base Id can be located in Settings under "Deployment details" in your knowledge base. It is the GUID nestled between "knowledgebases" and "generateAnswer" in the sample POST request shown there.
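The deployment details follow this pattern; the resource name, kb id, and endpoint key are placeholders:

POST /knowledgebases/<your-kb-id>/generateAnswer
Host: https://<your-resource-name>.azurewebsites.net/qnamaker
Authorization: EndpointKey <your-endpoint-key>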
Hope this helps!
You can also use Python to get this; take a look at the following code. That is useful if you want to write a program that fetches the KB ids dynamically.
import http.client, json, sys

# Represents the various elements used to create the HTTP request path
# for QnA Maker operations.
# Replace these with your resource name and a valid subscription key.
host = '<your-resource-name>.cognitiveservices.azure.com'
subscription_key = '<QnA-Key>'
get_kb_method = '/qnamaker/v4.0/knowledgebases/'

try:
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/json'
    }
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", get_kb_method, None, headers)
    response = conn.getresponse()
    data = response.read().decode("UTF-8")
    result = None
    if len(data) > 0:
        result = json.loads(data)
        # print(json.dumps(result, sort_keys=True, indent=2))
    kb_id = result["knowledgebases"][0]["id"]
    print(response.status)
    print(kb_id)
except Exception:
    print("Unexpected error:", sys.exc_info()[0])
    print("Unexpected error:", sys.exc_info()[1])

Connection error to Graphenedb hosted on heroku

Hi, I am getting an "Unable to connect to localhost on port 7687 - is the server running?" error whenever my Python code executes:
import os
import json
from urllib.parse import urlparse, urlunparse

from django.shortcuts import render

# Create your views here.
from py2neo import Graph, authenticate
from bottle import get, run, request, response, static_file
from py2neo.packages import neo4j

url = urlparse(os.environ.get("GRAPHENEDB_GOLD_URL"))
url_without_auth = urlunparse((url.scheme, ("{0}:{1}").format(url.hostname, url.port), '', None, None, None))
user = url.username
password = url.password

authenticate(url_without_auth, user, password)
graph = Graph(url_without_auth, bolt=False)
# graph = Graph(password='vjsj56#vb')


@get("/")
def get_index():
    return static_file("index.html", root="static")


@get("/graph")
def get_graph():
    print("i was here")
    print("graph start")
    results = graph.run(
        "MATCH (m:Movie)<-[:ACTED_IN]-(a:Person) "
        "RETURN m.title as movie, collect(a.name) as cast "
        "LIMIT {limit}", {"limit": 10})
    print("graph run the run")
    nodes = []
    rels = []
    i = 0
    for movie, cast in results:
        # print("i am here")
        nodes.append({"title": movie, "label": "movie"})
        target = i
        i += 1
        for name in cast:
            print(name)
            actor = {"title": name, "label": "actor"}
            try:
                source = nodes.index(actor)
            except ValueError:
                nodes.append(actor)
                source = i
                i += 1
            rels.append({"source": source, "target": target})
    return {"nodes": nodes, "links": rels}


@get("/search")
def get_search():
    try:
        q = request.query["q"]
    except KeyError:
        return []
    else:
        results = graph.run(
            "MATCH (movie:Movie) "
            "WHERE movie.title =~ {title} "
            "RETURN movie", {"title": "(?i).*" + q + ".*"})
        response.content_type = "application/json"
        return json.dumps([{"movie": dict(row["movie"])} for row in results])


@get("/movie/<title>")
def get_movie(title):
    results = graph.run(
        "MATCH (movie:Movie {title:{title}}) "
        "OPTIONAL MATCH (movie)<-[r]-(person:Person) "
        "RETURN movie.title as title,"
        "collect([person.name, head(split(lower(type(r)),'_')), r.roles]) as cast "
        "LIMIT 1", {"title": title})
    row = results.next()
    return {"title": row["title"],
            "cast": [dict(zip(("name", "job", "role"), member)) for member in row["cast"]]}
This code runs fine on my local system but gives the connection error when deployed to Heroku with GrapheneDB.
Exception location: /app/.heroku/python/lib/python3.6/site-packages/py2neo/packages/neo4j/v1/connection.py in connect, line 387
I'm Juanjo, from GrapheneDB.
At first glance the code looks fine, and the error points to a wrong URL, so it might be a problem with the environment variable. Can you please check your GRAPHENEDB_GOLD_URL variable?
You can do it like this:
$ heroku config:get GRAPHENEDB_GOLD_URL
It should be something like:
http://<user>:<pass>@XXX.graphenedb.com:24789/db/data
(please don't share your URL here)
If your variable is empty, please read more here on retrieving GrapheneDB environment variables.
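You can also add a quick sanity check in the app itself; a minimal sketch (an unset variable would explain py2neo falling back to localhost):

import os

url = os.environ.get("GRAPHENEDB_GOLD_URL")
if not url:
    raise RuntimeError("GRAPHENEDB_GOLD_URL is not set")
# Don't log the full value: it contains your credentials.
print("GRAPHENEDB_GOLD_URL is set")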
If that's not the issue, or the problem persists, could you please contact us via the support link on our admin panel? The Heroku team will forward the support ticket to us, and we'll have all the information related to your database attached to the ticket.
Thanks,
Juanjo
