How to get handle of a known data table - datatable

I am hoping to figure out the answer to a seemingly simple question. Below, I am trying to get a handle to a known table named "TableMe". Being able to print its name back to the screen would prove that I have obtained the handle correctly.
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Application import *
# Trial #1
#dataTable = Document.Data.Tables["TableMe"]
# Trial #2
dataTable = Document.ActiveDataTableReference
print dataTable.Title
Both Trial #1 and Trial #2 failed, for different reasons:
Trial #1:
AttributeError: 'getset_descriptor' object has no attribute 'Tables'
Trial #2:
AttributeError: 'getset_descriptor' object has no attribute 'Title'
I feel this must be a simple question for any fluent IronPython programmer. Can someone shed some light, please?

You don't need to import anything to access data tables. (In fact, the wildcard import from Spotfire.Dxp.Application is what breaks both trials: it shadows the script's built-in Document variable with the Document class, which is why you get the 'getset_descriptor' errors.) To list the tables:
for table in Document.Data.Tables:
    print table.Name
    print table.Id
    print table.RowCount
    print "---"
then to access a specific table:
table = Document.Data.Tables["TableMe"]
...or if you have the ID:
tID = "abc123"
table = Document.Data.Tables[tID]
...or by index (refer to the Data Table Properties dialog in Spotfire for the order, make sure to start at zero):
table = Document.Data.Tables[0]
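If you are not sure the table exists, you can guard the lookup first. A minimal sketch, assuming the DataTableCollection exposes the usual Contains check (verify against your Spotfire API version):
tableName = "TableMe"
if Document.Data.Tables.Contains(tableName):
    dataTable = Document.Data.Tables[tableName]
    print dataTable.Name   # use Name here; Title is not a DataTable property
else:
    print "No data table named " + tableName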

Related

How can I sum one column from the same table, to produce three different aggregates, using Sequel ORM?

My query is this:
DB[:expense_projects___p].where(:project_company_id=>user_company_id).
left_join(:expense_items___i, :expense_project_id=>:project_id).
select_group(:p__project_name, :p__project_id).
select_more{count(:i__item_id)}.
select_more{sum(:i__amount)}.to_a.to_json
which works.
However, payment methods include cash, card, and invoice, so I would like to sum each of those for summary purposes to achieve a discrete total for payments by cash, card, and invoice respectively. I included the following line in the query:
select_more{sum(:i__amount).where(:i__mop => 'card')}.
and the error message was
NoMethodError - undefined method `where' for #<Sequel::SQL::Function:0x007fddd88b5ed0>:
so I created the dataset separately with
ds1 = expense_items.where(:mop=>'card', :expense_company_id=>user_company_id).sum(:amount)
and appended it, at the end of the original query, with
select_append{ds1}
which achieved partial success as the returned json is now:
{"project_name":"project 2","project_id":2,"count":4,"sum":"0.40501E3","?column?":"0.2381E2"}
As can be seen, there is no name for this element, which I need in order to reference it in my getJSON call. I tried to add an identifier by appending ___a to the ds1 query, as below:
ds1 = expense_items.where(:mop=>'card', :expense_company_id=>user_company_id).sum(:amount___a)
but that failed.
In summary, is this the right approach, and in any case, how can I provide an identifier when doing a Sequel sum query? In other words, something like sum(:a_column).as(a_name).
Many thanks.
Dataset#sum returns the sum, not a modified dataset. You probably want something like:
ds1 = expense_items.where(:mop=>'card', :expense_company_id=>user_company_id).select{sum(:amount)}
select_append{ds1.as(:sum)}
I am not sure about the approach (better ask Jeremy Evans), but it works.
You just change .sum(:amount) to .select_more{sum(:amount).as(:desired_name)}:
ds1 = expense_items.where(:mop=>'card', :expense_company_id=>user_company_id).select_more{sum(:amount).as(:desired_name)}
and you actually get that desired_name in the DB response.

BigQuery - Check if table already exists

I have a dataset in BigQuery. This dataset contains multiple tables.
I am doing the following steps programmatically using the BigQuery API:
Step 1: Querying the tables in the dataset. Since my response is too large, I am enabling the allowLargeResults parameter and diverting the response to a destination table.
Step 2: Exporting the data from the destination table to a GCS bucket.
Requirements:
Suppose my process fails at Step 2; I would like to re-run this step.
But before I re-run, I would like to check/verify that the specific destination table named 'xyz' already exists in the dataset.
If it exists, I would like to re-run step 2.
If it does not exist, I would like to do foo.
How can I do this?
Thanks in advance.
Alex F's solution works on v0.27, but will not work on later versions. In order to migrate to v0.28+, the below solution will work.
from google.cloud import bigquery
from google.cloud.exceptions import NotFound

project_nm = 'gc_project_nm'
dataset_nm = 'ds_nm'
table_nm = 'tbl_nm'

client = bigquery.Client(project_nm)
dataset = client.dataset(dataset_nm)
table_ref = dataset.table(table_nm)

def if_tbl_exists(client, table_ref):
    try:
        client.get_table(table_ref)
        return True
    except NotFound:
        return False

if_tbl_exists(client, table_ref)
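Tying this back to the question, here is an illustrative follow-up that re-runs the export (Step 2) only when the destination table exists; the bucket path and the "do foo" branch are placeholders:
if if_tbl_exists(client, table_ref):
    # destination table exists: re-run the export to GCS
    extract_job = client.extract_table(table_ref, "gs://my_bucket/xyz-*.csv")
    extract_job.result()  # wait for the export to finish
else:
    print("destination table missing, re-running the query first")  # i.e. "do foo"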
Here is a python snippet that will tell whether a table exists (deleting it in the process--careful!):
def doesTableExist(project_id, dataset_id, table_id):
    bq.tables().delete(
        projectId=project_id,
        datasetId=dataset_id,
        tableId=table_id).execute()
    return False
Alternately, if you'd prefer not deleting the table in the process, you could try:
def doesTableExist(project_id, dataset_id, table_id):
    try:
        bq.tables().get(
            projectId=project_id,
            datasetId=dataset_id,
            tableId=table_id).execute()
        return True
    except HttpError as err:
        if err.resp.status != 404:
            raise
        return False
If you want to know where bq came from, you can call build_bq_client from here: http://code.google.com/p/bigquery-e2e/source/browse/samples/ch12/auth.py
In general, if you're using this to test whether you should run a job that will modify the table, it can be a good idea to just do the job anyway, and use WRITE_TRUNCATE as a write disposition.
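With the current google-cloud-bigquery client, that idea looks roughly like the sketch below; the project, dataset, table, and query names are placeholders:
from google.cloud import bigquery

client = bigquery.Client()
# Overwrite the destination table instead of checking whether it exists first.
job_config = bigquery.QueryJobConfig(
    destination="my_project.my_dataset.xyz",  # placeholder destination table
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
client.query(
    "SELECT * FROM `my_project.my_dataset.source_table`",  # placeholder query
    job_config=job_config,
).result()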
Another approach can be to create a predictable job id, and retry the job with that id. If the job already exists, the job already ran (you might want to double check to make sure the job didn't fail, however).
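A rough sketch of the predictable-job-id idea with the Python client; the job id and query are made up for illustration:
from google.cloud import bigquery
from google.api_core.exceptions import Conflict

client = bigquery.Client()
job_id = "export-query-step2-20240101"  # deterministic id for this run (made up)
try:
    job = client.query("SELECT 1", job_id=job_id)  # placeholder query
except Conflict:
    # A job with this id already exists, so this step already ran (or is running).
    job = client.get_job(job_id)
job.result()  # raises if the job ultimately failed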
Enjoy:
def doesTableExist(bigquery, project_id, dataset_id, table_id):
    try:
        bigquery.tables().get(
            projectId=project_id,
            datasetId=dataset_id,
            tableId=table_id).execute()
        return True
    except Exception as err:
        if err.resp.status != 404:
            raise
        return False
The only edit from the snippet above is in the exception handling.
You can now use exists() to check whether a dataset exists, and the same for a table; see the BigQuery exists documentation.
Recently BigQuery introduced so-called scripting statements that can be quite a game changer for some flows.
Check them out here:
https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting
Now, for example, to check if a table exists you can use something like this:
sql = """
BEGIN
IF EXISTS(SELECT 1 from `YOUR_PROJECT.YOUR_DATASET.YOUR_TABLE) THEN
SELECT 'table_found';
END IF;
EXCEPTION WHEN ERROR THEN
# you can print your own message like above or return error message
# however google says not to rely on error message structure as it may change
select ##error.message;
END;
"""
With my_bigquery being an instance of class google.cloud.bigquery.Client (already authenticated and associated with a project):
my_bigquery.dataset(dataset_name).table(table_name).exists() # returns boolean
It does an API call to test for the existence of the table via a GET request
Source: https://googlecloudplatform.github.io/google-cloud-python/0.24.0/bigquery-table.html#google.cloud.bigquery.table.Table.exists
It works for me using version 0.27 of the Google BigQuery Python module.
Inline SQL Alternative
tarheel's answer is probably the most correct at this point in time, but I was considering the comment from Ivan above that "404 could also mean the resource is not there for a bunch of reasons", so here is a solution that should always successfully run a metadata query and return a result.
It's not the fastest, because it always has to run the query, and BigQuery has overhead for small queries.
A trick I've seen previously is to query information_schema for a (table) object, and union that to a fake query that ensures a record is always returned even if the object doesn't exist. There's also a LIMIT 1 and an ordering to ensure the single record returned represents the table, if it does exist. See the SQL in the code below.
In spite of the docs claiming that BigQuery standard SQL is ISO compliant, it doesn't support information_schema, but it does have __TABLES_SUMMARY__.
A dataset is required because you can't query __TABLES_SUMMARY__ without specifying a dataset.
The dataset is not a parameter in the SQL because you can't parameterize object names without SQL injection issues (apart from with the magical _TABLE_SUFFIX; see https://cloud.google.com/bigquery/docs/querying-wildcard-tables).
#!/usr/bin/env python
"""
Inline SQL way to check a table exists in Bigquery
e.g.
print(table_exists(dataset_name='<dataset_goes_here>', table_name='<real_table_name>'))
True
print(table_exists(dataset_name='<dataset_goes_here>', table_name='imaginary_table_name'))
False
"""
from __future__ import print_function
from google.cloud import bigquery


def table_exists(dataset_name, table_name):
    client = bigquery.Client()
    query = """
        SELECT table_exists FROM
        (
            SELECT true as table_exists, 1 as ordering
            FROM __TABLES_SUMMARY__ WHERE table_id = @table_name
            UNION ALL
            SELECT false as table_exists, 2 as ordering
        ) ORDER by ordering LIMIT 1"""
    query_params = [bigquery.ScalarQueryParameter('table_name', 'STRING', table_name)]
    job_config = bigquery.QueryJobConfig()
    job_config.query_parameters = query_params
    if dataset_name is not None:
        dataset_ref = client.dataset(dataset_name)
        job_config.default_dataset = dataset_ref
    query_job = client.query(
        query,
        job_config=job_config
    )
    results = query_job.result()
    for row in results:
        # There is only one row because of LIMIT 1 in the SQL
        return row.table_exists

Getting value of database field in Crystal Reports

Currently I'm working on a legacy application that uses the Crystal Reports engine. I have to get the value of database fields programmatically. As I understand it, I need the proper event for the following code to work:
Report.Database.Tables(1).Fields(1).Value
But the value is always empty in the DownloadStarted/Finished event handlers. What am I doing wrong, and is it even possible?
I think that if you want to get the value of your table fields in a program, the best way is to get the field name from the report and then connect to your table directly, using the report field names as the table column names.
I do it in C#; I hope it can help you in VB6 too:
string name = report2.Database.Tables[1].Fields[1].Name;
string[] names = name.Split('.');
and then add your database to your program and use names like this:
DataTable dt = new DataTable();
DataColumn column = dt.Columns[names[1]];
If you just need your table values, you can use my last answer, but if you need the value of database fields in the Crystal Report, I mean something like a formula field, this code can help you:
CRAXDRT.FormulaFieldDefinitions definitions = report2.FormulaFields;
string formulaText = "IF " + report2.Database.Tables[1].Fields[3].Name
    + " > 10 THEN " + report2.Database.Tables[1].Fields[2].Name;
definitions.Add("Test", formulaText);
report2.Sections[1].AddFieldObject(definitions[1], 0, 0);

With the use of Apex code, how can I select all 'OpportunityProducts' objects that belong to a particular 'Opportunity Id'?

I am customizing the 'Sales' application on the salesforce.com platform.
Is there any way to select all the 'OpportunityProducts' objects that belong to a particular 'Opportunity Id'?
[SELECT Id FROM OpportunityProduct WHERE Opportunity =:opportunitId];
When I execute the above code to select those 'OpportunityProduct' records, I get the following error. If anyone has an idea, please let me know. Thanks.
Save error: sObject type 'OpportunityProduct' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name. Please reference your WSDL or the describe call for the appropriate names.
Another way to get this done when you need the actual products, not just the line items, is as follows. First get your opportunities:
List<Opportunity> opps = [SELECT Id, Name FROM Opportunity LIMIT 1000];
Then loop through to create a list of opportunity Ids
List<Id> oppIds = new List<Id>();
for(Opportunity o : opps)
{
oppIds.add(o.Id);
}
Now get your actual products that belong to your opportunities...
List<OpportunityLineItem> oppProds = [SELECT Id, PricebookEntry.Product2.Name, PricebookEntry.Product2.Family
FROM OpportunityLineItem
WHERE OpportunityId IN :oppIds];
Hope that helps.

What is the best way to pre-filter user access for SQLAlchemy queries?

I have been looking at the SQLAlchemy recipes on their wiki, but don't know which one is best to implement what I am trying to do.
Every row in my tables has a user_id associated with it. Right now, for every query, I query by the id of the user that's currently logged in, then by the criteria I am interested in. My concern is that developers might forget to add this filter to a query (a huge security risk). Therefore, I would like to set a global filter based on the current user's admin rights to filter what the logged-in user can see.
Appreciate your help. Thanks.
Below is a simplified, redefined query class that filters all model queries (including relations). You can pass it as the query_cls parameter to sessionmaker. The user ID parameter doesn't need to be global, as long as the session is constructed when it is already available.
from sqlalchemy.orm.query import Query
# _class_to_mapper lives in sqlalchemy.orm.util in the SQLAlchemy versions this
# recipe was written against.
from sqlalchemy.orm.util import _class_to_mapper


class HackedQuery(Query):

    def get(self, ident):
        # Use default implementation when there is no condition
        if not self._criterion:
            return Query.get(self, ident)
        # Copied from Query implementation with some changes.
        if hasattr(ident, '__composite_values__'):
            ident = ident.__composite_values__()
        mapper = self._only_mapper_zero(
            "get() can only be used against a single mapped class.")
        key = mapper.identity_key_from_primary_key(ident)
        if ident is None:
            if key is not None:
                ident = key[1]
        else:
            from sqlalchemy import util
            ident = util.to_list(ident)
        if ident is not None:
            columns = list(mapper.primary_key)
            if len(columns) != len(ident):
                raise TypeError("Number of values doesn't match number "
                                "of columns in primary key")
            params = {}
            for column, value in zip(columns, ident):
                params[column.key] = value
            return self.filter_by(**params).first()


def QueryPublic(entities, session=None):
    # It's not directly related to the problem, but is useful too.
    query = HackedQuery(entities, session).with_polymorphic('*')
    # Version for several entities needs thorough testing, so we
    # don't use it yet.
    assert len(entities) == 1, entities
    cls = _class_to_mapper(entities[0]).class_
    public_condition = getattr(cls, 'public_condition', None)
    if public_condition is not None:
        query = query.filter(public_condition)
    return query
It works for single-model queries only, and there is a lot of work to make it suitable for other cases. I'd like to see an elaborated version, since it's MUST HAVE functionality for most web applications. It uses a fixed condition stored in each model class, so you will have to modify it to your needs.
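A hedged usage sketch of how this might be wired up, assuming the older SQLAlchemy API the recipe targets; the model, engine, and condition below are illustrative only:
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Note(Base):  # illustrative model
    __tablename__ = 'notes'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, nullable=False)

# Fixed condition that QueryPublic picks up via getattr(cls, 'public_condition', None).
# Here it only hides rows without an owner; adapt it to your access rules.
Note.public_condition = Note.user_id != None

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

# The session calls query_cls(entities, session), so QueryPublic plugs in directly.
Session = sessionmaker(bind=engine, query_cls=QueryPublic)
session = Session()
notes = session.query(Note).all()  # filtered by public_condition automatically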
Here is a very naive implementation that assumes there is an attribute/property self.current_user that stores the logged-in user.
class YourBaseRequestHandler(object):

    @property
    def current_user(self):
        """The current user logged in."""
        pass

    def query(self, session, entities):
        """Use this method instead of :method:`Session.query()
        <sqlalchemy.orm.session.Session.query>`.
        """
        return session.query(entities).filter_by(user_id=self.current_user.id)
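Usage would then look something like this; the handler subclass and the Note model with its archived column are made up for illustration:
class NoteHandler(YourBaseRequestHandler):
    def visible_notes(self, session):
        # Every read goes through query(), so the user_id filter cannot be forgotten.
        return self.query(session, Note).filter_by(archived=False).all()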
I wrote an SQLAlchemy extension that I think does what you are describing: https://github.com/mwhite/multialchemy
It does this by proxying changes to the Query._from_obj and QueryContext._froms properties, which is where the tables to select from ultimately get set.
