Use an IronPython script to filter and pass filter selections between tables

I have two tables in the analysis. I am using the script below to filter table A and pass those filter selections to the matching filter in table B. Tables A and B are visualized in a bar chart. I am triggering the code when the value of a document property changes, following instructions here.
I am running into two problems.
1) After the script runs, clicking Reset All Filters results in only table A being displayed in the visualization. Clicking Reset All Filters again fixes the issue.
2) When I add a second filter (commented out in the code below), making a selection in the Type_A or Type_B filter wipes out the Type_B data from the visualization. I think the problem is in how IncludeAllValues is being handled, but I don't know how to fix it. Any help will be appreciated.
from Spotfire.Dxp.Application.Filters import *
from Spotfire.Dxp.Application.Visuals import VisualContent
from System import Guid
# Get the active page and its filter panel
page = Application.Document.ActivePageReference
filterPanel = page.FilterPanel

# Source filter: Type_A in table A (first table group)
theFilterA = filterPanel.TableGroups[0].GetFilter("Type_A")
lbFilterA = theFilterA.FilterReference.As[ListBoxFilter]()

# Target filter: Type_A in table B (second table group)
theFilter2A = filterPanel.TableGroups[1].GetFilter("Type_A")
lb2FilterA = theFilter2A.FilterReference.As[ListBoxFilter]()

# Copy table A's selection onto table B's filter
lb2FilterA.IncludeAllValues = False
lb2FilterA.SetSelection(lbFilterA.SelectedValues)
#########################Type_B###########################
# theFilterB = filterPanel.TableGroups[0].GetFilter("Type_B")
# lbFilterB = theFilterB.FilterReference.As[ListBoxFilter]()
# theFilter2B = filterPanel.TableGroups[1].GetFilter("Type_B")
# lb2FilterB = theFilter2B.FilterReference.As[ListBoxFilter]()
# lb2FilterB.IncludeAllValues = False
# lb2FilterB.SetSelection(lbFilterB.SelectedValues)
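One direction worth trying (a sketch only, not verified against this analysis): mirror the source filter's IncludeAllValues state instead of always forcing it to False, so that resetting the filters on table A (which restores IncludeAllValues there) doesn't pin table B to an empty selection. The helper name mirror_filter is made up for illustration:

def mirror_filter(column_name):
    # Look up the same-named filter in both table groups.
    source = filterPanel.TableGroups[0].GetFilter(column_name).FilterReference.As[ListBoxFilter]()
    target = filterPanel.TableGroups[1].GetFilter(column_name).FilterReference.As[ListBoxFilter]()
    if source.IncludeAllValues:
        # Source filter is not restricting anything, so don't restrict the target either.
        target.IncludeAllValues = True
    else:
        target.IncludeAllValues = False
        target.SetSelection(source.SelectedValues)

mirror_filter("Type_A")
mirror_filter("Type_B")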

Related

Disappearing column filter value

In Power Query Editor, I have a table I want to filter on a specific column. When I click on the arrow on the column header, it first gives me the following items:
When I click "Load More", the first entry "100R1" is not available anymore. I also know there should be other values (like "500"), but those are not shown either...
This behaviour starts only after I do a NestedJoin like so:
= Table.NestedJoin(Source,{"Number"},Parts,{"Parts"},"Parts",JoinKind.Inner)
So, the column that I join on is Number, the column I want to filter on is Type ...
When I try to filter Type on the Source table, it behaves correctly...
How is this possible?
PS: If I adjust the filter manually from:
Table.SelectRows(JoinedTable, each ([Type] = "100R2" or [Type] = "400R1" or [Type] = "400R2"))
to
Table.SelectRows(JoinedTable, each ([Type] = "100R2" or [Type] = "400R1" or [Type] = "400R2" or [Type] = "100R1"))
it effectively keeps instances of "100R1" ...
I once faced a situation where the filters in PQ lied to me. The problem was solved by clearing the cache.

Why does my CursorPagination class always return the same previous link?

Trying to paginate a large queryset so I can return to the same position I was in previously even if data has been added to the database.
Currently I have as my pagination class:
from rest_framework.pagination import CursorPagination

class MessageCursorPagination(CursorPagination):
    page_size = 25
    ordering = '-date'
In my View I have:
from rest_framework.generics import GenericAPIView
from rest_framework.authentication import TokenAuthentication, BasicAuthentication
from rest_framework.permissions import IsAuthenticated

class MessageViewSet(GenericAPIView):
    permission_classes = (IsAuthenticated, )
    authentication_classes = (TokenAuthentication,)
    pagination_class = pagination.MessageCursorPagination
    serializer_class = serializers.MessageSerializer

    def get(self, request, **kwargs):
        account_id = kwargs.get('account_id', None)
        messages = models.Message.objects.filter(account=account_id)
        paginated_messages = self.paginate_queryset(messages)
        results = self.serializer_class(paginated_messages, many=True).data
        response = self.get_paginated_response(results)
        return response
While testing to see if I'd set it up right, I got the results I was expecting with a next link and a null for the previous link.
After going to the next link I get a new next link, the next set of results, and a previous link.
When I continue to the next link I get the same previous link as before, but with the next, next link and the next set of data.
No matter how many times I go to the next, next link the previous link remains the same.
Why doesn't the previous link update?
-- Update --
It looks like the cause of my issue is that I have a lot of messages on the same date. Because I'm ordering by date, it tries to step back to the date before the current cursor. How can I order by date but step through the list with cursor pagination the way I would with ids?
From the Documentation
Proper usage of cursor pagination should have an ordering field that satisfies the following:
Should be an unchanging value, such as a timestamp, slug, or other field that is only set once, on creation.
Should be unique, or nearly unique. Millisecond precision timestamps are a good example. This implementation of cursor pagination uses a smart "position plus offset" style that allows it to properly support not-strictly-unique values as the ordering.
Should be a non-nullable value that can be coerced to a string.
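Following that guidance, one approach (a sketch, assuming the Message model has an auto-increment primary key named id) is to keep date as the primary sort but add the unique pk as a tie-breaker; CursorPagination accepts a tuple of ordering fields:

from rest_framework.pagination import CursorPagination

class MessageCursorPagination(CursorPagination):
    page_size = 25
    # '-id' breaks ties between messages that share the same date,
    # giving the cursor a (nearly) unique position to step from.
    ordering = ('-date', '-id')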

django-import-export how to merge/append instead of update for specific fields

I was able to get access to the new row data and existing instance by overriding import_obj.
def import_obj(self, instance, row, dry_run):
    super(RelationshipResource, self).import_obj(instance, row, dry_run)
    for field in self.get_fields():
        if isinstance(field.widget, widgets.ManyToManyWidget):
            tags = []
            for tag in instance.tagtag.all():
                tags.append(tag.name)
            tags.extend(row['tagtag'].split(','))  # concat existing and new tagtag list
            row['tagtag'] = ', '.join(tags)  # set as new import value
            # continue to save_m2m
            continue
        self.import_field(field, instance, row)
However, somewhere else in the import workflow the values are compared. Since the new concatenated value contains the original value, the field is considered unchanged and is not updated. Import thinks there is no change.
How can I save the instance with the full concatenated values?
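One possibility (a sketch, assuming skip_unchanged is enabled on the resource): override skip_row, which is the hook django-import-export uses to decide whether a row is worth saving, so the rewritten m2m value is never treated as "no change". The check below is deliberately blunt:

def skip_row(self, instance, original):
    # Never skip: force every row through to save() / save_m2m(),
    # even when the diff logic sees the concatenated value as unchanged.
    return False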

Check if data already exists before inserting into BigQuery table (using Python)

I am setting up a daily cron job that appends a row to BigQuery table (using Python), however, duplicate data is being inserted. I have searched online and I know that there is a way to manually remove duplicate data, but I wanted to see if I could avoid this duplication in the first place.
Is there a way to check a BigQuery table to see if a data record already exists first in order to avoid inserting duplicate data? Thanks.
CODE SNIPPET:
import logging
import uuid

import webapp2
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

PROJECT_ID = 'foo'
DATASET_ID = 'bar'
TABLE_ID = 'foo_bar_table'

class UpdateTableHandler(webapp2.RequestHandler):
    def get(self):
        credentials = GoogleCredentials.get_application_default()
        service = discovery.build('bigquery', 'v2', credentials=credentials)
        try:
            the_fruits = Stuff.query(Stuff.fruitTotal >= 5).filter(Stuff.fruitColor == 'orange').fetch()
            for fruit in the_fruits:
                # some code here
                basket = dict()
                basket['id'] = fruit.fruitId
                basket['Total'] = fruit.fruitTotal
                basket['PrimaryVitamin'] = fruit.fruitVitamin
                basket['SafeRaw'] = fruit.fruitEdibleRaw
                basket['Color'] = fruit.fruitColor
                basket['Country'] = fruit.fruitCountry
                body = {
                    'rows': [
                        {
                            'json': basket,
                            'insertId': str(uuid.uuid4())
                        }
                    ]
                }
                response = service.tabledata().insertAll(projectId=PROJECT_ID,
                                                         datasetId=DATASET_ID,
                                                         tableId=TABLE_ID,
                                                         body=body).execute(num_retries=5)
                logging.info(response)
        except Exception as e:
            logging.error(e)

app = webapp2.WSGIApplication([
    ('/update_table', UpdateTableHandler),
], debug=True)
The only way to test whether the data already exists is to run a query.
If you have lots of data in the table, that query could be expensive, so in most cases we suggest you go ahead and insert the duplicate, and then merge duplicates later on.
As Zig Mandel suggests in a comment, you can query over a date partition if you know the date when you expect to see the record, but that may still be expensive compared to inserting and removing duplicates.
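If you do want to check first, here is a minimal sketch using the same discovery-based client (assuming the basket's id column uniquely identifies a record; legacy SQL table syntax):

def record_exists(service, record_id):
    # Runs a synchronous query and returns True if the id is already present.
    # This costs a table scan unless the table is partitioned appropriately.
    body = {
        'query': 'SELECT COUNT(1) FROM [%s:%s.%s] WHERE id = "%s"'
                 % (PROJECT_ID, DATASET_ID, TABLE_ID, record_id),
    }
    result = service.jobs().query(projectId=PROJECT_ID, body=body).execute()
    return int(result['rows'][0]['f'][0]['v']) > 0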

How to load initial data into ModelMultipleChoiceField - Django 1.6

I'm creating my first site using Django and having trouble loading initial data into a ModelForm linked to the built in Django group table.
Right now, users can go to a group page and select from an array of groups they would like to join. When they return to the page later, they see the same list of options/checkboxes again with no indication of which groups they already belong to.
I can't figure out how to have the initial group data load into the form, such that if you are already a member of "group 1", for example, that checkbox is already checked. I would also like it so that you could uncheck a box, meaning that when you submit the form you could leave some groups and join others at the same time. Any help appreciated! My code below:
class GroupForm(ModelForm):
    groupOptions = forms.ModelMultipleChoiceField(queryset=Group.objects.all(),
                                                  label="Choose your groups",
                                                  widget=forms.CheckboxSelectMultiple())

    class Meta:
        model = Group
        fields = ['groupOptions']

def groupSelect(request):
    if request.method == 'POST':
        form = GroupForm(request.POST)
        if form.is_valid():
            group = form.cleaned_data['groupOptions']
            request.user.groups = group
        return render(request, 'groups/groupSelect.html', {'form': form})
    else:
        form = GroupForm()
        return render(request, 'groups/groupSelect.html', {'form': form})
It took me a few days and some trial and error, but I figured this one out on my own. I just needed to modify the second-to-last line in the code above. The ModelForm loads all available groups as options, and the line below pre-checks the groups the user already belongs to.
form = GroupForm(initial={ 'groupOptions': request.user.groups.all() })
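Putting it together, the GET branch builds the pre-checked form while the POST branch both joins and leaves groups in one submit (the same view as above, just with the initial data wired in):

def groupSelect(request):
    if request.method == 'POST':
        form = GroupForm(request.POST)
        if form.is_valid():
            # Assigning the queryset replaces the user's memberships, so
            # unchecked boxes drop those groups and checked ones are added.
            request.user.groups = form.cleaned_data['groupOptions']
    else:
        # Pre-check the groups the user already belongs to.
        form = GroupForm(initial={'groupOptions': request.user.groups.all()})
    return render(request, 'groups/groupSelect.html', {'form': form})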
