How to filter a map in DynamoDB on the AWS console?

I have a simple table like the one below in DynamoDB.
What I need:
I am trying to filter on the tools_type attribute, which is of type MAP; I want to filter on the antivirus key of this MAP column, but the filter option only offers the types String, Number, and Boolean. How can I filter on just antivirus and its value in the example below?
Note: I need to do the filtering in the AWS DynamoDB console.
What I tried:

Filtering on a MAP or LIST attribute in the web console is not possible. Please use an SDK or the REST API instead.
Here is an example of applying a filter on a MAP attribute using the Python SDK (boto3):
>>> import boto3
>>> from boto3.dynamodb.conditions import Key, Attr
>>> dynamodb = boto3.resource('dynamodb')
>>> table = dynamodb.Table('example-ddb')
>>> data = table.scan(
... FilterExpression=Attr('tools_type.antivirus').eq('yes')
... )
>>> data['Items']
[{'pk': '2', 'tools_type': {'antivirus': 'yes'}}]
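Note that Scan reads the whole table and applies the filter afterwards, and each call returns at most 1 MB of data, so larger tables need pagination. A minimal sketch of that (assuming the same example-ddb table and attribute names as above):
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('example-ddb')

items = []
scan_kwargs = {'FilterExpression': Attr('tools_type.antivirus').eq('yes')}
while True:
    # Each page holds up to 1 MB of scanned data; the filter is applied after the read.
    page = table.scan(**scan_kwargs)
    items.extend(page['Items'])
    # LastEvaluatedKey is only present when there are more pages to fetch.
    if 'LastEvaluatedKey' not in page:
        break
    scan_kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']

print(items)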

Related

AWS Lambda Python boto3 reading from DynamoDB table with multiple attributes in KeyConditionExpression

basicSongsTable has 'artist' as Partition Key and 'song' as sort key.
I am able to read using Query if I have one artist, but I want to read two artists with the following code. It gives a vague error: "errorMessage": "Syntax error in module 'lambda_function': positional argument follows keyword argument (lambda_function.py, line 17)"
import boto3
import pprint
from pprint import pprint

dynamodbclient = boto3.client('dynamodb')

def lambda_handler(event, context):
    response = dynamodbclient.query(
        TableName='basicSongsTable',
        KeyConditionExpression='artist = :varartistname1', 'artist =:varartistname2',
        ExpressionAttributeValues={
            ':varartistname1': {'S': 'basam'},
            ':varartistname2': {'S': 'sree'}
        }
    )
    pprint(response['Items'])
If I give only one KeyConditionExpression, it works.
        KeyConditionExpression='artist = :varartistname1',
        ExpressionAttributeValues={
            ':varartistname1': {'S': 'basam'}
        }
As per documentation:
KeyConditionExpression (string) --
The condition that specifies the key values for items to be retrieved
by the Query action.
The condition must perform an equality test on a single partition key
value.
You are trying to perform an equality test on multiple partition key values, which doesn't work.
To get data for both artists, you will have to either run two queries (see the sketch below) or do a scan, which I do not recommend.
For other options, take a look at this answer and its pros and cons.
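A minimal sketch of the two-query approach, reusing the low-level client and table name from the question (the helper name query_artist is just for illustration):
import boto3
from pprint import pprint

dynamodbclient = boto3.client('dynamodb')

def query_artist(artist_name):
    # One Query call per partition key value; each call tests equality on a single key.
    response = dynamodbclient.query(
        TableName='basicSongsTable',
        KeyConditionExpression='artist = :artist',
        ExpressionAttributeValues={':artist': {'S': artist_name}}
    )
    return response['Items']

items = query_artist('basam') + query_artist('sree')
pprint(items)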

Use an IronPython script to filter and pass filter selections between tables

I have two tables in the analysis. I am using the script below to be able to filter table A and pass those filter selections to the matching filter in table B. Table A and B are visualized in a bar chart. I am triggering the code when the value of a document property changes, following instructions here.
I am running into two problems.
1) After the script runs, clicking Reset All Filters results in only table A being displayed in the visualization. Clicking Reset All Filters again fixes the issue.
2) When I add a second filter (commented out in the code below), making a selection in the Type_A or Type_B filter wipes out the Type B data from the visualization. I think the problem is in how IncludeAllValues is being handled, but I don't know how to fix it. Any help will be appreciated.
from Spotfire.Dxp.Application.Filters import *
from Spotfire.Dxp.Application.Visuals import VisualContent
from System import Guid
#Get the active page and filterPanel
page = Application.Document.ActivePageReference
filterPanel = page.FilterPanel
theFilterA = filterPanel.TableGroups[0].GetFilter("Type_A")
lbFilterA = theFilterA.FilterReference.As[ListBoxFilter]()
theFilter2A = filterPanel.TableGroups[1].GetFilter("Type_A")
lb2FilterA = theFilter2A.FilterReference.As[ListBoxFilter]()
lb2FilterA.IncludeAllValues = False
lb2FilterA.SetSelection(lbFilterA.SelectedValues)
#########################Type_B###########################
# theFilterB = filterPanel.TableGroups[0].GetFilter("Type_B")
# lbFilterB = theFilterB.FilterReference.As[ListBoxFilter]()
# theFilter2B = filterPanel.TableGroups[1].GetFilter("Type_B")
# lb2FilterB = theFilter2B.FilterReference.As[ListBoxFilter]()
# lb2FilterB.IncludeAllValues = False
# lb2FilterB.SetSelection(lbFilterB.SelectedValues)

Django rest framework mongoengine update new field with default value

I'm using Django REST framework mongoengine. After creating a few documents, I want to add a new field with a default value. Is there a way to do that, or do I need to update the documents with a custom function?
Note: I want to fetch the data with a filter on the new field name. Since the field does not exist on the old documents, I get an empty result.
From what I understand, you are modifying a MongoEngine model (adding a field with a default value) after documents were inserted, and you are having issues when filtering your collection on that new field.
Basically you have the following confusing situation:
from mongoengine import *

conn = connect()
conn.test.test_person.insert({'age': 5})    # simulate an old object that predates the new field

class TestPerson(Document):
    name = StringField(default='John')      # the new field
    age = IntField()

person = TestPerson.objects().first()
assert person.name == "John"
assert TestPerson.objects(name='John').count() == 0
In fact, MongoEngine dynamically applies the default value when the field is missing from the underlying pymongo document, but it does not account for that when filtering.
The only reliable way to guarantee that filtering will work is to migrate your existing documents.
If it is only a matter of adding a field with a default value, you can do this with MongoEngine: TestPerson.objects().update(name='John')
If you made more involved changes to your document structure, then the best option is to drop down to pymongo:
coll = TestPerson._get_collection()
coll.update_many({}, {'$set': {'name': 'John'}})
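If you only want to touch documents that are actually missing the field, a variant of the same idea (a sketch, not part of the original answer) is to filter on $exists:
coll = TestPerson._get_collection()
# Only set the default on documents that do not have the field yet.
coll.update_many({'name': {'$exists': False}}, {'$set': {'name': 'John'}})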

Access individual fields using elasticsearch-dsl in Python

Is the below accurate, or should it be something else?
I am getting the expected results; I'm just checking whether this is the most efficient way to access individual (nested) fields.
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, Q
import json

client = Elasticsearch('my_server')
policy_number = 'POLICY1234'

s = Search(using=client, index="my_index").query("term", policyNumber=policy_number.lower())
es_response = s.execute()

for hits in es_response:
    print hits['policyNumber']
    print hits.party[0]['fullName']
    print hits.party[0].partyAddress[0]['address1']
    print hits.party[0].partyAddress[0]['city']
    print hits.party[0].phoneList[0]['phoneNumber']
You don't need to call execute manually, and you don't have to use [] to access fields by name; you can just use attribute access:
for hit in s:
    print hit.policyNumber
    print hit.party[0].fullName
    print hit.party[0].partyAddress[0].address1
    print hit.party[0].partyAddress[0].city
    print hit.party[0].phoneList[0].phoneNumber
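If efficiency is the main concern, you can also limit what Elasticsearch sends back; a minimal sketch (assuming the same index and field names as above) using source filtering so each hit only carries the fields you actually read:
s = Search(using=client, index="my_index") \
    .query("term", policyNumber=policy_number.lower()) \
    .source(['policyNumber', 'party'])   # only return these parts of _source

for hit in s:
    print hit.policyNumber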

Filter by UUID in Pig

I have a list of known UUIDs. I want to do a FILTER in Pig that filters out records whose id column does not contain a UUID from my list.
I have yet to find a way to specify bytearray literals such that I can write that filter statement.
How do I filter by UUID?
(In one attempt I tried using https://github.com/cevaris/pig-dse, per "How to FILTER Cassandra TimeUUID/UUID in Pig", thinking I could filter by a chararray literal of the UUID, but I got
grunt> post_creators= LOAD 'cql://mykeyspace/mycf/' using AbstractCassandraStorage;
2014-10-09 14:56:05,597 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: could not instantiate 'AbstractCassandraStorage' with arguments 'null'
)
Use this Python UDF:
import array
import uuid

@outputSchema("uuid:bytearray")
def to_bytes(uuid_str):
    # Convert the UUID string into its raw 16-byte representation.
    return array.array('b', uuid.UUID(uuid_str).bytes)
Filter like this:
users = FILTER users by user_id == my_udf.to_bytes('dd2e03a7-7d3d-45b9-b902-2b39c5c541b5');
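For the my_udf prefix in the FILTER to resolve, the UDF has to be registered first; a sketch assuming the Python above is saved as my_udf.py next to the Pig script:
-- Register the Jython UDF so my_udf.to_bytes() can be called from FILTER.
REGISTER 'my_udf.py' USING jython AS my_udf;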
