Is it possible to retrieve the original filename of a locally saved file in Laravel 5 that has been hashed?

Is there any way to get the original file name of a file that has already been saved to storage under a hashed name?
I can find hundreds of threads about how to get the original file name of an uploaded file, but these files have already been saved to disk with a hashed filename using $request->File->store().
My use case is that I want to create a controller that returns all saved images from disk so the user can view them in the browser. I want to allow them to search on filenames, but all the filenames are hashed, and I can't work out how to find the original filename.
These files have not been saved into a database; they are simply sitting in the public images directory. I can use the Storage facade to retrieve size and last-modified details, but nothing else.
Is it at all possible?

Related

Decrypting an sqlcipher database manually?

I have a slightly modified sqlcipher database. The database has been prepended with a json blob, which makes it so that the file cannot be read by pysqlcipher in the usual manner. I would like to open the database, ignoring the json blob.
I know that I could simply split the file into two files, open the db, do what I need and merge them, or create a temp copy of the db, but both of those solutions are undesirable.
I've attempted to use a few manual decryption tools like pysqlsimplecipher and sqlcipher-tools to decrypt the raw bytes of the database portion of the file, but I've been unable to get them to work. It also seems like the sqlite3 deserialize function might do what I need, but I'm unsure.
Is it possible to somehow read the file into memory and pass that into pysqlcipher?
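One way to skip the JSON blob without creating extra files is to locate where the JSON value ends and slice the remaining bytes into memory. A minimal sketch in Python, assuming the file really does start with a single well-formed JSON value (the filename is a placeholder):

import json

def split_prefixed_db(path):
    """Split a file made of a JSON blob followed by raw SQLCipher bytes.

    Returns (json_text, db_bytes).
    """
    with open(path, "rb") as f:
        raw = f.read()

    # latin-1 maps every byte to exactly one character, so the character
    # offset returned by raw_decode lines up with a byte offset in `raw`.
    text = raw.decode("latin-1")
    _, end = json.JSONDecoder().raw_decode(text)

    return text[:end], raw[end:]

meta_json, db_bytes = split_prefixed_db("prefixed.db")  # placeholder path

db_bytes is the untouched SQLCipher database. Note that the standard library's sqlite3.Connection.deserialize() (Python 3.11+) will not decrypt it, so you would still need an SQLCipher-aware build or library to consume those bytes.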

Transfer CSV files from Azure Blob Storage to Azure SQL Database using Azure Data Factory

I need to transfer around 20 CSV files inside a folder named ActivityPointer in an Azure blob storage container to an Azure SQL database in a single Data Factory pipeline, but ActivityPointer contains the 20 CSV files and another folder named snapshots inside it. So when I create a pipeline and give * to select all the CSV files inside ActivityPointer, it includes the snapshots folder too, which should not be included. Is there any possibility to complete this task? Also, I can't create another folder to move the snapshots folder into. What can I do now? Can anyone please help me out?
Assuming you want to copy all CSV files within the ActivityPointer folder, you can use a wildcard expression: provide the path up to the ActivityPointer folder and then *.csv as the wildcard file name.
Copy data also considers the inner folder when using wildcards (even if we use *.csv in the wildcard file path), so we have to validate whether each child item is a file or a folder. The following demonstrates this.
First, use a Get Metadata activity on the required folder with the field list set to Child items.
Now iterate through the child items using a ForEach activity, setting its items to:
@activity('Get Metadata1').output.childItems
Inside the ForEach, use an If Condition activity to check whether the current item is a file, using the following condition:
@equals(item().type,'File')
When this is true, you can use a Copy data activity to copy the file to the target table (ignore the false case). I have created a file_name parameter in my source dataset and pass its value as @item().name.
This will help you achieve your requirement. In my debug run I had 4 files and 1 folder; the folder is ignored and the rest are copied into the target table.
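Outside of Data Factory, the same file-versus-folder check can be sketched with the Azure Storage SDK for Python, purely to illustrate the filtering logic (the connection string and container name are placeholders):

from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="<connection-string>",  # placeholder
    container_name="<container>",    # placeholder
)

prefix = "ActivityPointer/"
csv_blobs = []
for blob in container.list_blobs(name_starts_with=prefix):
    relative = blob.name[len(prefix):]
    # Anything containing a further "/" sits inside snapshots (or another
    # nested folder), so keep only top-level .csv blobs.
    if "/" not in relative and relative.lower().endswith(".csv"):
        csv_blobs.append(blob.name)

print(csv_blobs)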

Load new files only from FTP to Blob in Azure Data Factory

I am trying to copy files from an FTP server to Blob storage, but the problem is that my pipeline copies all files, including the old ones. I would like to do an incremental load by only copying new files. How do you configure this? By the way, in my FTP dataset the parameters ModifiedStartDate and ModifiedEndDate are not showing. I would also like to configure these dates dynamically.
Thank you!
There's some work to be done in Azure Data Factory to achieve this. What you're trying to do, if I understand correctly, is to Incrementally Load New Files in Azure Data Factory. You can do so by looking up the latest modified date in the destination folder.
In short (see the linked article for more information):
Use a Get Metadata activity to make a list of all files in the destination folder
Use a ForEach activity to iterate over this list and compare each file's modified date with the value stored in a variable
If a file's modified date is greater than the variable, update the variable with that new value
Use the variable in the Copy activity's Filter by Last Modified field to filter out all files that have already been copied
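The comparison in these steps boils down to a watermark: remember the newest last-modified timestamp already copied and take only source files that are newer. A rough sketch of that logic in Python, with made-up file metadata standing in for the Get Metadata output:

from datetime import datetime, timezone

# Hypothetical (name, last modified) pairs from a listing step.
source_files = [
    ("report_old.csv", datetime(2023, 1, 10, tzinfo=timezone.utc)),
    ("report_new.csv", datetime(2023, 3, 5, tzinfo=timezone.utc)),
]

# Watermark: the latest modified time already present in the destination.
watermark = datetime(2023, 2, 1, tzinfo=timezone.utc)

# Only files strictly newer than the watermark still need copying.
new_files = [name for name, modified in source_files if modified > watermark]
print(new_files)  # ['report_new.csv']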

How do I add multiple CSV files to the catalog in Kedro?

I have 4 CSV files in Azure blob storage, with the same metadata, that I want to process. How can I add them to the data catalog under a single name in Kedro?
I checked this question
https://stackoverflow.com/questions/61645397/how-do-i-add-many-csv-files-to-the-catalog-in-kedro
But that seems to load all the files in the given folder, whereas my requirement is to read only the given 4 of the many files in the Azure container.
Example:
I have many files in the Azure container, among which are 4 transaction CSV files named sales_<date_from>_<date_to>.csv. I want to load these 4 transaction CSV files into the Kedro data catalog under one dataset.
For starters, PartitionedDataSet is lazy, meaning that a partition is not actually loaded until you explicitly call its load function. Even if you have 100 CSV files that get picked up by the PartitionedDataSet, you can select the partitions that you actually load and work with.
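For example, a node that receives a PartitionedDataSet gets a dictionary mapping partition names to load functions, so it can call only the ones it needs. A small sketch with hypothetical partition names:

import pandas as pd

def load_sales_partitions(partitions: dict) -> pd.DataFrame:
    """Receives a PartitionedDataSet as {partition_name: load_function}
    and loads only the sales_* partitions."""
    wanted = [name for name in partitions if name.startswith("sales_")]
    # Each value is a callable; nothing is read from storage until called.
    frames = [partitions[name]() for name in wanted]
    return pd.concat(frames, ignore_index=True)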
Second, what distinguishes these 4 files from the others? If they have a unique suffix, you can use the filename_suffix option to just select them. For example, if you have:
file_i_dont_care_about.csv
first_file_i_care_about.csv
second_file_i_care_about.csv
third_file_i_care_about.csv
fourth_file_i_care_about.csv
you can specify filename_suffix: _file_i_care_about.csv.
I don't think there's a direct way to do this. You can add another subdirectory inside the blob storage with the 4 files and then use:
my_partitioned_dataset:
type: "PartitionedDataSet"
path: "data/01_raw/subdirectory/"
dataset: "pandas.CSVDataSet"
Or, in case the requirement of using only 4 files is not going to change anytime soon, you might as well list the 4 files in catalog.yml separately to avoid over-engineering it.

Difference between Database and File Storage in Parse.com

Based on the FAQ at Parse.com:
What is the difference between database storage and file storage?
Database storage refers to data stored as Parse Objects, which are limited to 128 KB in size. File storage refers to static assets that are stored using the Parse File APIs, typically images, documents, and other types of binary data.
Just want some clarification here:
So the strings, arrays, etc. that are created are considered Parse Objects and fall under database storage, and the URL of a file also falls under database storage since it is part of a Parse Object. But the actual files themselves are considered file storage?
Thanks.
Yes. Any file that you upload to Parse goes to file storage; the rest, including the URLs of such files, is stored in the database.
