csv-table reference as table number rather than table name - python-sphinx

I am making a document with many tables. But the only way I know how to refer to them in the text is with :ref:`my_table`, and that puts the entire table title, "The long title of my table", into my text block. I would rather have it put in "Table 1.3" instead. Is that possible?
.. _my_table:

.. csv-table:: The long title of my table
   :file: my_table.csv
   :header-rows: 1
A separate but table-related question: is there a Sphinx/docutils option for putting the table name/caption at the bottom rather than the top, or is that a stylesheet/LaTeX thing?

There's probably not a super-easy way to do this, but the numfig extension (as referenced in this question) does something very like this for Figures, and could probably be modified/adapted to do something similar for Tables.
A custom extension like this is probably the only way to accomplish this at present.
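(A hedged note for anyone reading later: Sphinx 1.3 and newer build this kind of numbering in, so a custom extension may no longer be required. A rough sketch, assuming the my_table label from the question:)

# conf.py -- turn on automatic figure/table numbering (built into Sphinx 1.3+)
numfig = True

Then, in the document body:

See :numref:`my_table` for the raw data.

which renders as "Table 1.3"-style text instead of the full caption (the exact numbering depends on your numbered toctree).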


Does Spotfire provide any simple way of creating an "Other" category to group entries within a filter?

Right now, I am using a filtering scheme which only looks at the data of the 5 or 6 most common entries in the 'Clinic' field. But there are a handful of other possibilities which might account for a few rows each. They are too inconsequential to include on their own (I am using pie charts and bar charts), but I would like these rows to be accounted for. For this reason, I would like to create an "Other" category which groups these entries together. What is the best way of doing this? I know I can create a calculated column that groups everything aside from the top 5 or 6 into an "Other" category, but I thought there might be a way to keep working with the original column and achieve the same result.
Unfortunately not. In 6.5.x you will have to write a case statement that maps everything outside the most common entries to "Other".
In 7.0.x you can go to Insert > Binned Column. At the bottom you can use values to create a bin. Add the values you want to the bin and call it "Other". Of course, if you look at the column created this way, it is a case statement, but it is a whole lot faster than writing it yourself.
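For the 6.5.x route, a minimal calculated-column sketch along those lines might look like this (the clinic names are placeholders for whatever your five or six most common entries actually are; If() is used here, which is equivalent in effect to the case statement produced by the binned column):

If([Clinic] = "Clinic A" or [Clinic] = "Clinic B" or [Clinic] = "Clinic C" or
   [Clinic] = "Clinic D" or [Clinic] = "Clinic E",
   [Clinic],
   "Other")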
Following phiver's answer, I came up with this solution in Spotfire 6.5.2:
Add a calculated column with something like this:
If(DenseRank(Count([Formation]) OVER ([Formation]),"desc")<10, DenseRank(Count([Formation]) OVER ([Formation]),"desc") & " " & [Formation], "10 Other")
Hope that helps.

Is it possible to reverse a column transformation in Spotfire, and if not, what are the alternatives?

I've made the mistake of using the 'Calculate and Replace Column' feature to replace the wrong column, and realized after the fact. The column I replaced corresponds to last names and is important. I would like to retrieve this column but maintain my other 15 or so data transformations. Ideally, I would like to remove this transformation, but I've come up empty so far. Here's what I've tried:
I tried adding the 'last name' column again from the same external source, using Insert > Columns... I also tried renaming this column to avoid the data transformation. Unfortunately, this resulted in an entirely empty column, so either it did not successfully match to the table or it was affected by the transformation.
I checked the source information, and found exactly the 3-4 lines that I wish were not there. I thought it might be possible to edit this but haven't found a way. This seems like it would be the easiest.
Another idea I had was I could replace the data table with the same source, and repeat all of the transformations from the replace data table dialogue (excluding the bad one). This is my next plan of attack, but I figured I would come on here to see if there's an easier way first.
Thanks in advance!
Good news for you, #jeremyVollen!
It is possible to 'edit' your transformation per Tibco article 44098.
Resolution: If there is more than one transformation on a data table and you need to edit any of those transformations, follow the steps below:
Go To Edit >> Data Table Properties.
Select the desired data table inside which the transformation has been added and click on Refresh Data > With Prompt.
A new window will pop up which will allow you to make the desired changes in each of the transformations.
Unfortunately it is NOT possible to reverse data table transformations.
It IS possible to undo the transformations with Edit >> Undo or CTRL+Z, but that's as far as it goes.
My strategy for dealing with this is (in accordance with your #3) to visit Edit >> Data Table Properties, select the table I'm interested in, select Source Information, then copy the contents of the text area and paste it into Notepad. Then I'll use File >> Replace Data Table and start over from the beginning, keeping the notepad open so I don't miss any steps.
I realize it's not ideal, but there is unfortunately not another way.

Qlikview: Matching columns of two indirectly linked tables does not work

Following is the data model of the dashboard I am facing problem in:
[screenshot of the dashboard's data model]
Now, what I want to achieve: in Case there is a field named Manufacturing_Date, and in MWODefetcs there is a field named Defect_Date. What I want is that whenever a record is selected from a table showing cases from Case, the corresponding records are shown in another table based on the exact match Manufacturing_Date = Defect_Date.
As simple as it sounds, I cannot seem to accomplish it. I have tried the following expressions to no avail:
Count({<[Defect_Date_text]=p([Manu_text]),FaultID=,DEFECT_CODE=>}MFG_BARCODE_NUM)
sum({$<Defect_Date ={"=$(Manufacturing_Date__c)"}>}Defect_Date)
Do the two tables need to be directly linked? Is it the intermediary iFaults table that is preventing me from accomplishing this?
Please help.
You should use the P() set expression, like this:
sum({$<Defect_Date =P(Manufacturing_Date__c) >}Defect_Date)

csv-table formatting through preamble?

Try as I might, I cannot figure out how to change the default table format in the pdf output from sphinx.
I could edit the .tex file, or the writer.py source code... but both of those seem like bad options.
Is there anything that can be passed to the preamble to accomplish that?
It depends on what you are trying to accomplish by changing the table format. For instance, if you want to define row colors and change the tables accordingly across the document, you can use the xcolor package and redefine how tabular handles that at the point of definition by changing the tabular environment.
So in the preamble you would do:
\usepackage[table]{xcolor}
\definecolor{foo}{RGB}{236,137,29}
\definecolor{bar}{RGB}{232,108,31}
\let\newtabular\tabular
\let\newendtabular\endtabular
\renewenvironment{tabular}{\rowcolors{2}{foo}{bar}\newtabular}{\newendtabular}
This will overwrite the default tabular environment and apply the foo and bar row colors throughout the document, starting at the second row.
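If you are building the PDF through Sphinx rather than editing the .tex file, the usual place to put this is the latex_elements preamble in conf.py. A minimal sketch (package interactions can vary between Sphinx versions, so treat this as a starting point):

# conf.py -- pass the custom preamble to the LaTeX builder
latex_elements = {
    'preamble': r'''
\usepackage[table]{xcolor}
\definecolor{foo}{RGB}{236,137,29}
\definecolor{bar}{RGB}{232,108,31}
\let\newtabular\tabular
\let\newendtabular\endtabular
\renewenvironment{tabular}{\rowcolors{2}{foo}{bar}\newtabular}{\newendtabular}
''',
}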
For more directives related to tables, you should take a look at sphinxtr. Jeff Terrace has some great extensions included there, but the two main ones to use are numfig and figtable. You can wrap a csv-table in a figtable:
.. figtable::
   :label: my-csv-label
   :caption: My CSV Table
   :nofig:

   .. csv-table::
      :file: data/foo.csv
      :header-rows: 1
This changes the standard table format, putting the caption below instead of above. You also have the added benefit of being able to link directly to that table by using :num:.
:num:`Table #my-csv-label`
It will number the reference automatically instead of inserting the full caption text. You can also use
.. figtable::
   :spec: {r l r l}
To better define how you want your table to appear.

Is it possible to traverse rowtype fields in Oracle?

Say I have something like this:
somerecord SOMETABLE%ROWTYPE;
Is it possible to access the fields of somerecord without knowing the field names?
Something like somerecord[i], such that the order of fields would be the same as the column order in the table?
I have seen a few examples using dynamic SQL, but I was wondering if there is a cleaner way of doing this.
What I am trying to do is generate/get the DML (an insert statement) for a specific row in my table, but I haven't been able to find anything on this.
If there is another way of doing this I'd be happy to use it, but I would also be very curious to know how to do the former part of this question - it's more versatile.
Thanks
This doesn't exactly answer the question you asked, but might get you the result you want...
You can query the USER_TAB_COLUMNS view (or the other similar *_TAB_COLUMN views) to get information like the column name (COLUMN_NAME), position (COLUMN_ID), and data type (DATA_TYPE) on the columns in a table (or a view) that you might use to generate DML.
You would still need to use dynamic SQL to execute the generated DML (or at least generate static SQL separately).
However, this approach won't work for identifying the columns in an arbitrary query (unless you create a view of it). If you need that, you might need to resort to DBMS_SQL (or other tools).
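As a rough sketch of the dictionary-query idea (the table name is illustrative, and a real generator would still need to format values per data type):

-- list the columns of SOMETABLE in positional order; this list can then
-- be concatenated into a generated INSERT statement
SELECT column_name,
       column_id,
       data_type
FROM   user_tab_columns
WHERE  table_name = 'SOMETABLE'
ORDER  BY column_id;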
Hope this helps.
As far as I know there is no clean way of referencing record fields by their index.
However, if you have a lot of different kinds of updates of the same table, each with its own set of columns to update, you might want to avoid dynamic SQL and look in the direction of statically populating your record with values and then issuing update someTable set row = someTableRecord where someTable.id = someTableRecord.id;.
This approach has its own drawbacks, like issuing an update to every column, even unchanged ones, and thus creating additional redo log data, but I believe it should be considered.
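A minimal sketch of that record-based approach (the table and column names here are hypothetical):

DECLARE
  somerecord sometable%ROWTYPE;
BEGIN
  -- populate the whole record for one row
  SELECT * INTO somerecord FROM sometable WHERE id = 42;

  -- change whichever fields need changing
  somerecord.some_column := 'new value';

  -- write the entire record back in a single statement
  UPDATE sometable
  SET    ROW = somerecord
  WHERE  id = somerecord.id;
END;
/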
