Oracle APEX export PDF file name

I am exporting a PDF report from Oracle APEX using a Report Query defined in Shared Components. By default, the file name for the generated PDF is the Report Query name. Is there a way to customize the name? I need to include a timestamp in it, but I cannot find a solution. I am not using any external tool for report generation, and the layout is defined in XSL-FO.
Thanks for your help.

Create a PL/SQL process and use the following API:
APEX_UTIL.DOWNLOAD_PRINT_DOCUMENT (
p_file_name => 'myreport123',
p_content_disposition => 'attachment',
p_application_id => :APP_ID,
p_report_query_name => 'report1',
p_report_layout_name => 'report1',
p_report_layout_type => 'rtf',
p_document_format => 'pdf');
The above code assumes that your Report Query name and Report Layout name are both report1.
Note that you can set p_file_name to whatever you like, for example a name that includes a timestamp, as in the sketch below.
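For example, a minimal sketch of such a process with a timestamped file name (it assumes the Report Query and Report Layout are both named report1 and that the layout type is xsl-fo, as stated in the question):
BEGIN
  -- Generates e.g. myreport_20240101_153000.pdf from report query "report1"
  APEX_UTIL.DOWNLOAD_PRINT_DOCUMENT (
    p_file_name           => 'myreport_' || TO_CHAR(SYSDATE, 'YYYYMMDD"_"HH24MISS'),
    p_content_disposition => 'attachment',
    p_application_id      => :APP_ID,
    p_report_query_name   => 'report1',
    p_report_layout_name  => 'report1',
    p_report_layout_type  => 'xsl-fo',
    p_document_format     => 'pdf');
END;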

If you are using XSLT 2.0 or XSLT 3.0, you can get the current time (not the modification time of the source document) using current-dateTime(). See https://www.w3.org/TR/xpath-functions/#func-current-dateTime
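For instance, a small XSLT 2.0 sketch that formats the current date and time as a compact timestamp (the picture string is just an example):
<!-- emits e.g. 20240101_153000 using the XSLT 2.0 current-dateTime() function -->
<xsl:value-of select="format-dateTime(current-dateTime(), '[Y0001][M01][D01]_[H01][m01][s01]')"/>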
If you are still using XSLT 1.0, your XSLT processor might implement the EXSLT extensions for date and time. See http://exslt.org/date/index.html

Laravel Firebird Error "invalid request BLR at offset"

I'm trying to connect to a .fdb database using a Firebird package in my web app.
settings:
'firebird' => [
'driver' => 'firebird',
'host' => 'localhost',
'port' => '3050',
'database' => storage_path('db.fdb'),
'username' => 'SYSDBA',
'password' => 'masterkey',
'charset' => 'UTF8',
'version' => '2.5', // 1.5 - same
'role' => null,
'UdfAccess' => 'Full', // try like DBeaver connection
'isc_dpb_no_db_triggers' => true, // try like DBeaver connection
],
controller:
$fb = DB::connection('firebird')->table('table')->count();
result:
SQLSTATE[HY000] [335544343] invalid request BLR at offset 132 (SQL: select count(*) as "aggregate" from "TABLE").
The full error is:
invalid request BLR at offset 132 function POS is not defined module name or entrypoint could not be found
What is the problem? How can I either ignore the UDF or enable it?
Windows 10 x64, with Firebird 2.5 and 3 installed.
I have the same problem in a C# app.
The problem is that your database has a UDF definition (POS), but cannot find or load the UDF library (or the library doesn't contain the function entrypoint). This can happen if you moved your database to a different server, but forgot to move/install the accompanying UDF libraries, or if you're using a different version of the UDF library (one that doesn't contain the function entrypoint).
There are basically two ways to recover from this:
1. Make sure your database server has the appropriate UDF library.
2. Remove any usage of the UDF from your database.
Option 1
Your POS UDF uses the DMM_UDF library (which doesn't sound familiar to me, so it is likely a custom UDF library) with the ibPos entry point. On Windows, this library would generally be called DMM_UDF.dll or dmm_udf.dll.
Find the right UDF DLL file (e.g. on the old database server), and make sure it has the same bitness as your Firebird server. If you only have a 32-bit version, replace your 64-bit Firebird server with a 32-bit Firebird server.
You need to add this DLL to the udf directory of your Firebird installation and make sure the UDF directory is enabled in firebird.conf via the UdfAccess setting. In Firebird 2.5 and 3.0 the default is UdfAccess = Restrict UDF, which allows use of the udf directory; note that in Firebird 4.0 the default is UdfAccess = None, which disables loading of UDFs entirely.
After doing this, restart Firebird, and Firebird should then be able to load the UDF.
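For illustration, the relevant firebird.conf setting would look something like this (the value shown is the Firebird 2.5/3.0 default):
# firebird.conf: allow UDF libraries to be loaded from the installation's UDF directory
UdfAccess = Restrict UDF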
Option 2
Hunt down and remove or replace all usages of the UDF.
A query to find the dependencies on the UDF POS (at least top-level dependencies and dependent table/view columns) is:
select
dependent_type.RDB$TYPE_NAME dependent_type,
rd.RDB$DEPENDENT_NAME,
rrf.RDB$RELATION_NAME dependent_table,
rrf.RDB$FIELD_NAME dependent_column,
rf.RDB$COMPUTED_SOURCE dependent_expression,
depended_on_type.RDB$TYPE_NAME depended_on_type,
rd.RDB$DEPENDED_ON_NAME
from RDB$DEPENDENCIES rd
inner join RDB$TYPES dependent_type
on rd.RDB$DEPENDENT_TYPE = dependent_type.RDB$TYPE and dependent_type.RDB$FIELD_NAME = 'RDB$OBJECT_TYPE'
inner join RDB$TYPES depended_on_type
on rd.RDB$DEPENDED_ON_TYPE = depended_on_type.RDB$TYPE and depended_on_type.RDB$FIELD_NAME = 'RDB$OBJECT_TYPE'
left join RDB$RELATION_FIELDS rrf
on dependent_type.RDB$TYPE_NAME IN ('COMPUTED_FIELD', 'FIELD') and rrf.RDB$FIELD_SOURCE = rd.RDB$DEPENDENT_NAME
left join RDB$FIELDS rf
on dependent_type.RDB$TYPE_NAME IN ('COMPUTED_FIELD', 'FIELD') and rf.RDB$FIELD_NAME = rd.RDB$DEPENDENT_NAME
where rd.RDB$DEPENDED_ON_TYPE = 15
and rd.RDB$DEPENDED_ON_NAME = 'POS'
For example, if the UDF is used for a calculated column in table X, column CALCULATED_POS with expression POS(A, B), then you can drop it with
alter table X drop CALCULATED_POS
However, if this column has dependencies, or you still need it, this might not be so easy (or not a good idea).
You can also try to alter it to remove the dependencies, by replacing the expression with an equivalent one using built-in functions, or with a dummy expression just to get things to work. For example, if POS is equivalent to the built-in POSITION, you could do something like:
alter table X alter CALCULATED_POS generated always as (POSITION(A, B))
You will need to make similar modifications if the UDF is used in a view, check constraint (listed as a trigger), trigger, or stored procedure.
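Once no dependencies on the UDF remain, you can also drop its declaration entirely; a sketch, assuming you no longer need POS anywhere:
-- removes the UDF declaration; this fails while dependencies still exist
DROP EXTERNAL FUNCTION POS;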

When I create a local XML file I get an error importing it into Carrot2

I am trying to create a local xml file for import. The tags specified in https://doc.carrot2.org/#figure.input-xml-format give me an error. Specifically, I get the error:
"Failed to read attributes from:
/lungo/home/holz/nestlib/extras/text/carrot2/goodpubmed.xml Element
'query' does not have a match in class
org.carrot2.util.attribute.AttributeValueSets at line 2".
If I remove the query element, I get the same error for the document element. I have just downloaded the latest version for Linux with Java 1.8.
You're probably trying to load your XML file to the attribute view rather than process it through the clustering algorithm. Here's how you can pass your XML file as data for clustering: http://doc.carrot2.org/#section.getting-started.xml-files.

Adding metadata to PDF

I need to add metadata to a PDF that I am creating with prawn. That metadata will be extracted later, probably by pdf-reader. It will contain internal document numbers and other information needed by downstream tools.
It would be convenient to associate metadata with each page of the PDF. The PDF specification claims that I can store per-page private data in a "Page-Piece Dictionary". Section 14.5 states:
A page-piece dictionary (PDF 1.3) may be used to hold private conforming product data. The data may be associated with a page or form XObject by means of the optional PieceInfo entry in the page object (see Table 30) or form dictionary (see Table 95). Beginning with PDF 1.4, private data may also be associated with the PDF document by means of the PieceInfo entry in the document catalogue (see Table 28).
How can I set a "page-piece dictionary" with prawn? I'm using prawn 0.12.0.
If that's not possible, how else can I achieve my goal of storing metadata about each page, either at the page level, or at the document level?
You can look at the source of prawn:
https://github.com/prawnpdf/prawn/commit/131082af5abb71d83de0e2005ecceaa829224904
info = { :Title => "Sample METADATA",
         :Author => "Me",
         :Subject => "Not Working",
         :CreationDate => Time.now }
pdf = Prawn::Document.new(:template => filename, :info => info)
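If you set the Info dictionary this way, it can later be read back with pdf-reader; a sketch, assuming the document was saved as foo.pdf:
require 'pdf-reader'

# Reads the document-level Info dictionary written via the :info option.
reader = PDF::Reader.new("foo.pdf")
puts reader.info.inspect   # e.g. {:Title=>"Sample METADATA", :Author=>"Me", ...}
Note that the :info hash is document-level metadata, not per-page data.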
One way is to do none of the above; that is, don't attach the metadata as a page-piece dictionary, and don't attach it with prawn. Instead, attach the metadata as a file attachment using the pdftk command-line tool.
To do it this way, create a file with the metadata. For example, the file metadata.yaml might contain:
---
- :document_id: '12345'
  :account_id: 10
  :page_numbers:
    - 1
    - 2
    - 3
- :document_id: '12346'
  :account_id: 24
  :page_numbers:
    - 4
After you are done creating the pdf file with prawn, then use pdftk to attach the metadata file to the pdf file:
$ pdftk foo.pdf attach_files metadata.yaml output foo-with-attachment.pdf
Since pdftk will not modify a file in place, the output file must be different than the input file.
You may be able to extract the metadata file using pdf-reader, but you can certainly do it with pdftk. This command unpacks metadata.yaml into the unpacked-attachments directory.
$ pdftk foo-with-attachment.pdf unpack_files output unpacked-attachments
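Back in Ruby, the unpacked file can then be loaded as usual; a sketch using the paths above (because the example file uses symbol keys, newer Ruby versions may need YAML.unsafe_load_file instead):
require 'yaml'

# Load the attachment that pdftk unpacked and walk the per-document entries.
metadata = YAML.load_file("unpacked-attachments/metadata.yaml")
metadata.each do |entry|
  puts "#{entry[:document_id]}: pages #{entry[:page_numbers].join(', ')}"
end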

BIRT: Specifying XML Datasource file as parameter does not work

Using BIRT Designer 3.7.1, it's easy enough to define a report for an XML file data source; however, the input file name is initially written into the .rptdesign file as a constant value. Nice for a start, but useless in real life. What I want is to start the BIRT ReportEngine via the genReport.bat script, specifying the name of the XML data source file as a parameter. That should be trivial, but it is surprisingly difficult...
What I found out is this: Instead of defining the XML data source file as a constant in the report definition you can use params["datasource"].value, which will be replaced by the parameter value at runtime. Also, in BIRT Designer you can define the Report Parameter (datasource) and give it a default value, say "file://d:/sample.xml".
Yet, it doesn't work. This is the result of my Preview attempt in Designer:
Cannot open the connection for the driver: org.eclipse.datatools.enablement.oda.xml.
org.eclipse.datatools.connectivity.oda.OdaException: The xml source file cannot be found or the URL is malformed.
ReportEngine, started with 'genReport.bat -p "datasource=file://d:/sample.xml" xx.rptdesign' says nearly the same.
Of course, I have made sure that the XML file exists, and tried different spellings of the file URL. So, what's wrong?
What I found out is this: Instead of defining the XML data source file as a constant in the report definition you can use params["datasource"].value, which will be replaced by the parameter value at runtime.
No, it won't - at least, if you specify the value of XML Data Source File as params["datasource"].value (instead of a valid XML file path) at design time, then you will get an error when attempting to run the report. This is because BIRT tries to use the literal string params["datasource"].value as the file path, rather than evaluating that expression.
Instead, you need to use an event handler script - specifically, a beforeOpen script.
To do this:
Left-click on your data source in the Data Explorer.
In the main Report Design pane, click on the Script tab (instead of the Layout tab). A blank beforeOpen script should be visible.
Copy and paste the following code into the script:
this.setExtensionProperty("FILELIST", params["datasource"].value);
If you now run the report, you should find that the value of the parameter datasource is used for the XML file location.
You can find out more about parameter-driven XML data sources on BIRT Exchange.
Since this is an old thread but still useful, I'll add some info:
In the Edit Data Source dialog, add a sample URL so you have data to create your dataset.
Create your dataset.
Then remove the URL from the data source.
Add the beforeOpen script described above.

Embedding documents in existing documents with the Ruby Driver for MongoDB

I'm trying to embed a document inside an existing document using the Ruby Driver.
Here's what my primary document looks like:
db = Mongo::Connection.new.db("Portfolios")
project_collection = db.collection("Projects")
new_Project = { :url => 'http://www.tekfolio.me/billy/portfolio/focus', :author => 'Billy'}
project_collection.insert(new_Project)
After I've created new_Project and added it to project_collection, I may or may not later add another collection, called assets, to the same document. This is where I'm stuck. The following code doesn't seem to do anything:
new_asset = { :image_url => 'http://assets.tekfolio.me/portfolios/68fbb25a-8353-41a8-a779-4bd9762b00f2/projects/13/assets/20/focus2.PNG'}
new_Project.assest.insert(new_asset)
I'm certain I've butchered my understanding of MongoDB, the Ruby driver, and the embedded document concept, and would appreciate your help getting me out of this wet paper bag I can't seem to get out of ;)
Have you tried just setting the value of asset without insert, and using update instead?
new_Project["asset"] = new_asset
project_collection.update({"_id" => new_Project["_id"]}, new_Project)
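Alternatively, with the same driver you can push the asset into an embedded array atomically instead of rewriting the whole document; a sketch, assuming the embedded field is an array named assets:
# $push appends new_asset to the (possibly not yet existing) assets array
# of the already inserted project document.
project_collection.update(
  { "_id" => new_Project["_id"] },
  { "$push" => { "assets" => new_asset } }
)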
I think you are trying to "update" the new_Project record with the asset.
It doesn't work because you are only updating the hash in Ruby, not in MongoDB. You have to first get a reference to the object in MongoDB, update it, and then save it; check this info:
http://www.mongodb.org/display/DOCS/Updating+Data+in+Mongo
(If you can, assign the asset before inserting, and it should work.)
