How to extract files from a Qlik Sense site - download

I have executed this script in my Qlik Sense app:
myTable:
LOAD *
FROM [lib://DataFiles/orgFile.xlsx]
(ooxml, embedded labels, table is sheet1);
Store * from myTable into [lib://DataFiles/transFile.qvd] (qvd);
The QVD file is stored in the DataFiles web folder, but I don't know how to reach it. I would like to download the QVD file or save it to C:/somefolder.

DataFiles is the name of a data connection. If you have the right permissions (edit data connections in the Hub and/or read data connections in the QMC) you can see the connection string. The connection string will be the path (local or network) where the QVDs are stored.
The path of every folder data connection is from Qlik's point of view. If a new data connection is created with the path C:/some-folder, that path actually points to the C: drive of the server where the reload is performed.
Folder data connections can point to a network path as well. The only "condition" is that Qlik must be able to access that network path. Technically, you can create a folder on your machine, share it, and then create a data connection (or edit an existing one) to write the QVD files there. The obvious downside is that if your machine is offline, any apps relying on this data connection can't be reloaded.
Probably the easiest approach in your case is to find the connection string of the DataFiles data connection.
If you have access to the QMC (and the correct permissions there), you can see the connection string in the data connection's properties. If you have edit permissions on the data connection, you can also find the path by opening the connection for editing.
P.S. And again ... the connection path is from Qlik's point of view. So in my example, the C:/QlikData/Apps folder is on the QS server itself, not on my machine.
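Once you know the real path behind the connection string, and if that folder is shared on the network, getting the QVD onto your own machine is just a file copy. A minimal Python sketch, where the share and file names are placeholders for whatever your admin actually exposes:
import shutil

# Hypothetical share: the server-side folder from the connection string,
# e.g. C:\QlikData\Apps, shared as \\qlikserver\QlikData\Apps.
src = r"\\qlikserver\QlikData\Apps\transFile.qvd"
dst = r"C:\somefolder\transFile.qvd"

shutil.copy2(src, dst)  # run from an account that can read the share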

Related

How to migrate a PostgreSQL 10 database from Windows C drive to another drive

I have an almost identical problem to the one in this post:
How to migrate a Windows 10 installation of PostgreSQL 9.5.7 to a larger disk
I have a PostgreSQL database on my C drive which is running out of space. I want to move my database to my larger F drive. I'm running into the same issue as the user in the post I mentioned:
The path to executable under the service to start my server is
C:\PostgreSQL\pg10\pgservice.exe "//RS//PostgreSQL 10 Server"
There's no explicit path to the data directory written there. I'm not sure how to change where PostgreSQL looks for its data since there's no -D argument defined.
I think if I just copy my data over to the larger drive and pass the new data directory as an argument on startup, my issue would be solved. Any ideas on how to do this given my current configuration?
I wouldn't call it a migration, rather just transferring files from one location to another.
It can be done by:
Stopping the database server
Cutting/pasting the data directory to your new drive location
Reconfiguring the database server to use the new location
Starting the server again, or restarting the system if needed (see the sketch below)
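A rough Windows sketch of those steps in Python, under explicit assumptions: the data directory location is a guess (check yours first), and re-registering the service with pg_ctl replaces the vendor's pgservice.exe wrapper, so test this on a copy and keep the old data directory until the server starts cleanly.
import shutil
import subprocess

SERVICE = "PostgreSQL 10 Server"        # name taken from the //RS// argument in the question
OLD_DATA = r"C:\PostgreSQL\data\pg10"   # assumed current data directory -- verify before running
NEW_DATA = r"F:\PostgreSQL\data\pg10"   # new location on the larger drive
PG_CTL = r"C:\PostgreSQL\pg10\bin\pg_ctl.exe"

# 1. Stop the server (run this script from an elevated prompt).
subprocess.run(["net", "stop", SERVICE], check=True)

# 2. Copy the data directory to the new drive; the service account needs
#    full control over the copy, so check permissions afterwards.
shutil.copytree(OLD_DATA, NEW_DATA)

# 3. Re-register the service so it starts with -D pointing at the new location.
subprocess.run([PG_CTL, "unregister", "-N", SERVICE], check=True)
subprocess.run([PG_CTL, "register", "-N", SERVICE, "-D", NEW_DATA], check=True)

# 4. Start the server again.
subprocess.run(["net", "start", SERVICE], check=True)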

Transferring multiple files through an Informatica FTP connection

I have a requirement to generate the target file in Informatica with the date/time appended to its name. How will the Informatica FTP connection identify such a dynamic file name?
Also, I would like to know if it is possible to FTP multiple files at a time via an Informatica FTP connection. Could someone please help me with this?
It's actually pretty simple: you just have to use the part of the file name that is constant and then place a *.
For example:
Myfile_20190607.txt
Myfile_20190507.txt
If I specify Myfile_2019*, this is good enough to pick up the files specified above. You may have to play with the * and the criteria to fit the files that you need.
Note: if you are sending files to a third party, try to use SFTP instead of plain old FTP; most organizations block FTP to outside IPs.
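Outside Informatica, the same wildcard idea can be illustrated with a short Python sketch; the host, credentials, and pattern below are placeholders:
from ftplib import FTP
from fnmatch import fnmatch

ftp = FTP("ftp.example.com")   # hypothetical server
ftp.login("user", "secret")
# Keep only the names matching the constant part plus the wildcard.
for name in ftp.nlst():
    if fnmatch(name, "Myfile_2019*"):
        with open(name, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
ftp.quit()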
As far as I know, up to Informatica 9.x it is neither possible to generate a dynamic filename nor to create multiple files using an FTP connection. The only option was to create the files on the Informatica server and then run a script to FTP them over to the destination server (see the sketch after the steps below).
Here is how:
Edit your workflow, choose the Variables tab, and create a workflow variable with datatype NSTRING; assume the variable name is $$wf_timestamp.
Create an Assignment task and assign TO_CHAR(SYSDATE,'YYYYMMDD') to the variable in the Assignment task.
Edit the session: choose the Mapping tab, choose your target, then Connections, then edit the FTP Value; in the Remote Filename attribute, enter your filename with the timestamp, e.g. myfile_$$wf_timestamp.csv.
Put your Assignment task before your session in the workflow.
That's it.
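For the script half of that approach, a minimal Python sketch; the destination host, credentials, and target-file directory are assumptions, and for SFTP you would use a library such as paramiko instead of ftplib:
from ftplib import FTP
from glob import glob
from pathlib import Path

ftp = FTP("ftp.example.com")   # hypothetical destination server
ftp.login("user", "secret")
# Push every file the session produced; the names already carry the timestamp.
for path in glob("/infa/tgtfiles/myfile_*.csv"):   # assumed output directory
    with open(path, "rb") as f:
        ftp.storbinary("STOR " + Path(path).name, f)
ftp.quit()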

Cannot access files on FTP server from Azure Data Factory

I currently have access to a third party's FTP server which, upon login, automatically redirects me to a directory that does not contain the files I am trying to download.
ftp://ftp.fakehost.com -> ftp://ftp.fakehost.com/uselessDir
My files are in ftp://ftp.fakehost.com/usefulDir.
This FTP server does not support directory traversal, so I cannot get to usefulDir by simply modifying my URL. FileZilla works since I can execute specific FTP commands to get to the directory I want.
Can a Data Factory FTP service or dataset be customized to work around this problem, since Data Factory cannot access usefulDir directly?
Please correct me if I don't understand your question correctly. Have you tried creating a dataset and manually putting usefulDir in the folderPath property directly, instead of using the Authoring UI to navigate to that folder (which is not possible based on your description)?
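As a quick sanity check outside Data Factory, you can confirm that the directory is reachable by an absolute path, which is essentially what putting usefulDir in folderPath relies on. A small Python sketch with placeholder credentials:
from ftplib import FTP

ftp = FTP("ftp.fakehost.com")
ftp.login("user", "secret")   # placeholder credentials
print(ftp.pwd())              # the server drops you in /uselessDir
ftp.cwd("/usefulDir")         # absolute path, no traversal from uselessDir needed
print(ftp.nlst())             # your files should be listed here
ftp.quit()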

Creating a mssql database backup using odbc on mac

So, MSSQL is nice enough to have given us a nifty little sql code for creating a database backup from a command line:
BACKUP DATABASE [db_name] TO DISK = N'D:\backups\back.bak' WITH NOFORMAT, NOINIT, NAME = N'db_name', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
However, I am looking to be able to run this command from a php or even shell script on a remote Mac server.
The problem I am running into is that when I try to change the DISK to, say, my admin home directory, it keeps complaining:
Cannot open backup device 'D:\PATH\ON\SERVER\/Users/admin/back.bak'. Operating system error 3(The system cannot find the path specified.).
Anyone know what I am missing here? I would be very appreciative.
SQL Server's BACKUP command does a backup to the database server's local disk. That means that setting the path to a directory on the client machine makes no sense.
If you want a database backup stored on your client machine, I can basically see 3 options:
Back up to a temporary location accessible from the database server, and copy it from there to your client (see the sketch below).
Mount a disk shared from your client machine on your database server as, for example, X:\ and do the backup to that disk.
Find another backup solution that does backups in a different way (sorry, I have no recommendations).
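For option 1, here is a minimal sketch of issuing the backup from the Mac over ODBC using Python's pyodbc; the server name, credentials, and server-side path are placeholders, and note the autocommit flag, since BACKUP DATABASE cannot run inside a transaction:
import pyodbc

# Assumes Microsoft's ODBC driver is installed on the Mac.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost;DATABASE=master;UID=user;PWD=secret",
    autocommit=True,  # BACKUP cannot run inside a transaction
)
cursor = conn.execute(
    r"BACKUP DATABASE [db_name] TO DISK = N'D:\backups\back.bak' "  # path on the server
    r"WITH NOFORMAT, NOINIT, NAME = N'db_name', SKIP, NOREWIND, NOUNLOAD, STATS = 10"
)
while cursor.nextset():  # drain the STATS progress messages so the backup completes
    pass
conn.close()
The backup file still lands on the server's disk; you then copy it across (option 1) or point DISK at a share mounted from your Mac (option 2).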
You can use RazorSQL, it's a client for Mac and Windows.
https://razorsql.com/

What's the best way to (programmatically) determine a file's network origin?

For an application I'm writing, I want to programmatically find out which computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions or is this data stored somewhere in Windows?
When a file is copied to the local system, Windows does not keep any record of where it was copied from. So unless the application that created it saved such information in the file, it will be lost.
With file auditing, file and directory operations can be tracked, but I don't think that will include the source path for file copies (just who created them and when).
Yes, it seems like you would either need to detect the file transfer based on interception of network traffic, or if you have the ability to alter the file in some way, use public key cryptography to sign files using a machine-specific key before they are transferred.
Create a service on either the destination computer or on the file-hosting computers which will add records to an Alternate Data Stream attached to each file, much the way that Windows handles the Zone.Identifier stream for files downloaded from the internet.
You can have a background process on machine A which "tags" each file as having been tagged by machine A on such-and-such a date and time. Then when machine B downloads the file, assuming we are using NTFS filesystems, it can see the tag from A. Or, if you can't have a process at the server, you can use NTFS streams on the "client" side via packet sniffing methods as others have described. The bonus here is that future file-copies will retain the data as long as it is between NTFS systems.
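For illustration, tagging and reading back such a stream from Python looks like this; the ":origin" stream name is invented for the example, and it only works on NTFS volumes:
import socket

def tag_origin(path: str) -> None:
    # Record the tagging machine's name in an alternate data stream.
    with open(path + ":origin", "w") as ads:
        ads.write(socket.gethostname())

def read_origin(path: str) -> str:
    with open(path + ":origin") as ads:
        return ads.read()

# tag_origin(r"C:\share\report.xlsx") on machine A; later, read_origin() on
# machine B returns machine A's hostname, as long as the file stayed on NTFS.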
Alternative: create a requirement that all file transfers must be done through a Web portal (as opposed to network drag-and-drop). Built in logging. Or other type of file retrieval proxy. Do you have control over procedures such as this?
