I want to write data to the Zedboard's SD card. I am able to write data to DRAM. Now I want to read the DRAM's data and write it to the SD card. I have followed this (http://elm-chan.org/fsw/ff/00index_e.html) but it does not fulfill my requirement. I am not able to find any tutorial or example for this.
Please share any tutorial link or example. Thanks.
If you're using Vivado SDK, which I assume you are, it is really straightforward to use the SD Card.
To include the FAT file system, inside Xilinx SDK, open your Board Support Package (system.mss file) and select Modify this BSP's Settings. Under Overview, you can select xilffs.
Next, you must write the software to access the SD card. This library offers a wide variety of functions. You can look either here_1, here_2, or here_3. The second reference provides a wide variety of more complex functions.
Aside from this, in order to use the SD card, what you should do is basically outlined below. Note that this is just an overview, and you should refer to the references I gave you.
// Flush and disable the data cache
Xil_DCacheFlush();
Xil_DCacheDisable();
// Initialize the SD card (old Xilffs API: f_mount(volume, FATFS*))
f_mount(0, 0);        // unregister any previous work area
f_mount(0, &fatfs);   // register a static FATFS work area for volume 0
// Open a file using f_open
// Read and write using f_read / f_write
// Close your file with f_close
// Unmount the SD card with f_mount(0, 0)
Note that experience teaches me that you need to write to the file in blocks that are multiples of the block size of the file system, which for FAT file systems is typically 512 bytes. Writing less than 512 bytes and then closing the file will leave it zero bytes in length.
In newer versions of the Xilffs (FatFs) library the syntax has changed slightly.
The new syntax is:
static FATFS FS_instance; // File System instance
const char *path = "0:/"; // string pointer to the logical drive number
static FIL file1; // File instance
FRESULT result; // FRESULT variable
static char fileName[24] = "FIL"; // name of the log
result = f_mount(&FS_instance, path, 0); //f_mount
result = f_open(&file1, (char *)fileName, FA_OPEN_ALWAYS | FA_WRITE); //f_open
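A minimal sketch of the remaining steps under the new syntax, assuming the mount and open above succeeded (src_buffer and its size are illustrative):
UINT bytes_written;
char src_buffer[512];  // illustrative: data previously read from DRAM
result = f_write(&file1, src_buffer, sizeof(src_buffer), &bytes_written); // write one full block
result = f_close(&file1);           // flush and close so the directory entry is updated
result = f_mount(NULL, path, 0);    // unmount when done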
Maybe this can help you.
I have a .bin file in a blob in Azure Blob Storage.
I would like to pass it to fastText to load a model.
I tried this:
fr_embedding_file_path = "cc_fr_300_bin/cc.fr.300.bin"
fr_embedding_file = client.get_blob_client(blob=fr_embedding_file_path)
fr_embedding_file = fr_embedding_file.download_blob()
fr_model = fasttext.load_model(fr_embedding_file)
I think I have to do something else after fr_embedding_file = fr_embedding_file.download_blob() but I don't know what. The .bin file is 7 GB and comes from https://fasttext.cc/docs/en/crawl-vectors.html
I get this message:
'TypeError: loadModel(): incompatible function arguments. The following argument types are supported: 1. (self: fasttext_pybind.fasttext, arg0: str) -> None
Invoked with: <fasttext_pybind.fasttext object at ...>, <azure.storage.blob._download.StorageStreamDownloader object at ...>'
What can I do?
Please check whether the given references help to work around this:
As per the error, arg0 (the first positional argument) should be a str (string).
fasttext.load_model accepts a string as its first argument; the second argument is the utf-8 encoding, which is optional.
It seems fasttext.load_model(str) can only load files from the local filesystem. Try to copy the data to the local filesystem and then load it from there, e.g.
Check this reference from Stack Overflow.
Try to download the blob to a file and load that file:
blob.download_to_filename(local_model_file)
model_1 = fasttext.load_model(local_model_file)
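The snippet above uses names from the referenced answer; for the Azure SDK specifically, a minimal sketch could look like this (local_model_file is a hypothetical local path; client and fr_embedding_file_path come from the question):
import fasttext

local_model_file = "cc.fr.300.bin"  # hypothetical local path
# Stream the blob to disk, then load the model from the local file
with open(local_model_file, "wb") as f:
    client.get_blob_client(blob=fr_embedding_file_path).download_blob().readinto(f)
fr_model = fasttext.load_model(local_model_file)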
Please see whether you can use methods like content_as_bytes or content_as_text or a stream from the azure.storage.blob.StorageStreamDownloader class and iterate through them, or try to make the argument a str as in reference 1.
Please check the references below to work around this:
python 3.x - FastText: TypeError: loadModel(): incompatible function arguments - Stack Overflow
Issue on azure sdk for python (github.com)
load model from bytes in memory · Issue (github.com)
Azure Machine Learning Service - Stack Overflow
I am currently trying to do some TensorFlow inference (C backend) using Boost::GIL (challenging). I have been able to load my PNG image (rgb8_image_t) and convert it to rgb32f_image_t.
I still need three things: the raw pointer to the data, the memory allocated, and the dimensions.
For the memory allocated, unfortunately the function total_allocated_size_in_bytes() is private, so I did this:
boost::gil::view(dest).size() * boost::gil::view(dest).num_channels() * sizeof(value_type);
This is valid as long as there is no extra padding for alignment. But is there any nicer alternative?
For the dimensions, I need to match numpy (via Pillow); I hope both libraries use the same memory layout. From my understanding, the data is interleaved and contiguous by default, so it should be fine.
Lastly, the raw pointer _memory: it is a private data member of the image class with no dedicated accessor. boost::gil::view(dest).row_begin(0) returns an iterator to the first pixel, but I am not sure how to get from it to the _memory pointer. Any suggestions?
Thank you very much,
++t
PS: TensorFlow offers a C++ backend; however, it is not available from any package manager, and wrangling Bazel is beyond my strength.
The GIL documentation describes the various memory layouts quite accurately.
The point of the library, though, is to abstract the memory layouts away. If you require a specific representation (planar/interleaved, packed or unpacked), you are doing things "the hard way" as far as the library interface is concerned.
So, I think you can read and convert in one go, e.g. for a jpeg:
gil::rgb32f_image_t img;
gil::image_read_settings<gil::jpeg_tag> settings;
gil::read_and_convert_image("input.jpg", img, settings);
Now getting the raw data is possible:
auto* raw_data = gil::interleaved_view_get_raw_data(view(img));
It happens to be the case that the preferred implementation storage is interleaved, which is likely what you're expecting. If your particular image storage is planar, the call will not compile (and you'd probably want planar_view_get_raw_data(vw, plane_index) instead).
Note that you'll have to reinterpret_cast to float [const]* if you need that, because there is no public interface to get a reference to scoped_channel_value<>::value_. But the BaseChannelValue type is indeed float, and you can assert that the wrapper doesn't add additional weight:
static_assert(sizeof(float) == sizeof(raw_data[0]));
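Spelled out, a short continuation of the snippet above (the names floats and count are assumptions for illustration):
auto vw = gil::view(img);
auto* raw = gil::interleaved_view_get_raw_data(vw);
static_assert(sizeof(float) == sizeof(raw[0]));
// The channel wrapper is layout-compatible with float, so the cast is safe in practice
auto* floats = reinterpret_cast<float*>(raw);
std::size_t count = vw.size() * vw.num_channels(); // number of float elements for the tensor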
Alternative Approach:
Alternatively, you can set up your own raw pixel buffer, mount a mutable view over it, and use that as the target of your initial read/convert:
// get dimension
gil::image_read_settings<gil::jpeg_tag> settings;
auto info = gil::read_image_info("input.jpg", settings).get_info();
// setup raw pixel buffer & view
using pixel = gil::rgb32f_pixel_t;
auto data = std::make_unique<pixel[]>(info._width * info._height);
auto vw = gil::interleaved_view(info._width, info._height, data.get(),
info._width * sizeof(pixel));
// load into buffer
read_and_convert_view("input.jpg", vw, settings);
I've actually checked that it works correctly by writing out the resulting view:
//// just for test - doesn't work for 32f, so choose another pixel format
//gil::write_view("output.png", vw, gil::png_tag());
Hello, I'm new to filter driver programming. I took the Windows swapBuffers example and I'm trying to modify it to print the file name for each read/write operation, together with the data it tried to read/write.
I tried to do it like this:
FLT_PREOP_CALLBACK_STATUS SwapPreWriteBuffers(
_Inout_ PFLT_CALLBACK_DATA Data,
_In_ PCFLT_RELATED_OBJECTS FltObjects,
_Flt_CompletionContext_Outptr_ PVOID *CompletionContext)
{
    /* Here we do some logic that checks that we want to write more than
       0 bytes, gets the volume context, allocates aligned non-paged memory
       for the "swapping memory", and builds an MDL; then, if all of that
       succeeds, I try this: */
    WCHAR filename[300] = {0};
    swprintf(filename, L"%wZ", &Data->Iopb->TargetFileObject->FileName);
    if (NULL != wcsstr(filename, L"try_me.txt"))
    {
        DbgPrint("Fname %S try to write %S\n", filename, Data->Iopb->Parameters.Write.WriteBuffer);
    }
}
My main problem (I think) is that Data->Iopb->TargetFileObject->FileName is a UNICODE_STRING and I don't know how to compare it with a plain string holding the file name.
My other problem is how to print the buffer correctly to the debug output without risking a blue screen (I've gotten a lot of them lately...). Sometimes I reach this function when no string was written; how do I recognize the difference and print it safely?
Last question: is there any way to use try/except in drivers, or do all faults go straight to a blue screen?
Thank you.
P.S.
Here is the link to the entire code (without the additions I wrote, which I listed in this post above):
https://github.com/Microsoft/Windows-driver-samples/blob/master/filesys/miniFilter/swapBuffers/swapBuffers.c
File names are a concept of IRP_MJ_CREATE; you can't rely on the name at IRP_MJ_WRITE or IRP_MJ_READ, so it is better to capture it in create. Also, try to get the name from FltObjects->FileObject->FileName; it may give you the desired name of your file, along with other names too.
For printing the written data, you need to check the IRP flags; in a minifilter they are available from Data->Iopb->IrpFlags. If IRP_PAGING_IO is set, then print the buffer using
KdPrint((".... "));
Yes, you can use try/except in drivers too; a minimal sketch follows.
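A sketch of kernel-mode structured exception handling around the buffer access (the length-based print format is an assumption, since the write buffer is not necessarily NUL-terminated):
__try
{
    // Print at most Write.Length bytes worth of wide characters
    DbgPrint("Data: %.*S\n",
             (int)(Data->Iopb->Parameters.Write.Length / sizeof(WCHAR)),
             (PWCHAR)Data->Iopb->Parameters.Write.WriteBuffer);
}
__except (EXCEPTION_EXECUTE_HANDLER)
{
    DbgPrint("Exception while reading the write buffer\n");
}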
Hope this helps.
:)
I've written a duplicate finder in Java, but I need to include hard link support for it. Unfortunately, there seems to be no way to dig out a file's MFT entry in Java.
Although there is a method called fileKey() in the BasicFileAttributeView class, it won't work on the NTFS file system (I haven't tested it on ext yet).
I also found the method isSameFile() (in java.nio.file.Path). Does anyone know how this method works? It seems to be doing the right thing, but it returns a Boolean value, so it is worthless for me (I wish to put the results into a map and group them by their MFT entries).
I can always compare the creation times, modification times, etc. for each file, but this is just giving up.
Is there any way to accomplish what I am trying to do in either C++ or Java? I care more about making it work on NTFS than ext.
You would need to use the FILE_ID_FULL_DIRECTORY_INFORMATION structure along with the NtQueryDirectoryFile function (or the FILE_INTERNAL_INFORMATION structure along with the NtQueryInformationFile, if you already have a handle) inside ntdll.dll (available since Windows XP, if not earlier) to get the 8-byte file IDs and check if they are the same.
This will tell you if they are the same file, but not if they are the same stream of the same file.
I'm not sure how to detect if two files are the same stream from user-mode -- there is a structure named FILE_STREAM_INFORMATION which can return all the streams associated with a file, but it doesn't tell you which stream you have currently opened.
Detecting hard links is usually accomplished by calling FindFirstFileNameW. But there is a lower-level way.
To get the NTFS equivalent to inodes, try the FSCTL_GET_OBJECT_ID ioctl code.
There's a unique (until the file is deleted) identifier in the BY_HANDLE_FILE_INFORMATION structure as well.
If the volume has an enabled USN Change Journal, you can issue the FSCTL_READ_FILE_USN_DATA ioctl code and check the FileReferenceNumber member in the USN_RECORD structure.
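As a sketch of the BY_HANDLE_FILE_INFORMATION route in C++ (the Win32 calls are as documented; error handling is kept minimal):
#include <windows.h>
#include <cstdint>

// Returns the volume-unique file index (NTFS file ID) for a path, or 0 on failure.
uint64_t FileIndexOf(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, 0,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 0;
    BY_HANDLE_FILE_INFORMATION info{};
    uint64_t id = 0;
    if (GetFileInformationByHandle(h, &info))
        id = (uint64_t(info.nFileIndexHigh) << 32) | info.nFileIndexLow;
    CloseHandle(h);
    return id;
}
// Two paths are hard links to the same file iff their file indices and
// their dwVolumeSerialNumber values both match.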
In Java you can use sun.nio.ch.FileKey, which is a non-transparent wrapper around the NTFS inode. All hard links share the same inode.
Therefore, if you need to collect hard links, you can create a FileKey from each suspect and compare them (e.g. by putting FileKey -> File pairs into a Multimap).
I find fileKey is always null. Here is some code that can actually read the NTFS inode number. There remain many aspects I'm not happy with, not least that it relies on reflection.
import sun.nio.ch.FileKey;
import java.io.*;
import java.lang.reflect.Field;
import java.nio.file.Path;

class NTFS {
    static long inodeFromPath(Path path) throws IOException, NoSuchFieldException, IllegalAccessException {
        try (FileInputStream fi = new FileInputStream(path.toFile())) {
            FileDescriptor fd = fi.getFD();
            FileKey fk = FileKey.create(fd);
            // FileKey keeps the two 32-bit halves of the NTFS file index in
            // private fields, so reflection is needed to read them out
            Field privateField = FileKey.class.getDeclaredField("nFileIndexHigh");
            privateField.setAccessible(true);
            long high = (long) privateField.get(fk);
            privateField = FileKey.class.getDeclaredField("nFileIndexLow");
            privateField.setAccessible(true);
            long low = (long) privateField.get(fk);
            // combine the halves into the 64-bit inode number
            long power = (long) 1 << 32;
            return high * power + low;
        }
    }
}
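A hypothetical usage sketch, grouping candidate paths by inode so that any group with more than one entry is a set of hard links (groupByInode and candidates are illustrative names):
import java.nio.file.Path;
import java.util.*;

static Map<Long, List<Path>> groupByInode(Iterable<Path> candidates) throws Exception {
    Map<Long, List<Path>> byInode = new HashMap<>();
    for (Path p : candidates) {
        byInode.computeIfAbsent(NTFS.inodeFromPath(p), k -> new ArrayList<>()).add(p);
    }
    return byInode; // lists with more than one Path are hard links to one file
}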
I'm using an opaque API in some ruby code which takes a File/IO as a parameter. I want to be able to pass it an IO object that only gives access to a given range of data in the real IO object.
For example, I have an 8 GB file, and I want to give the API an IO object that exposes a 1 GB range from the middle of my real file.
real_file = File.new('my-big-file')
offset = 1 * 2**30 # start 1 GB into it
length = 1 * 2**30 # end 1 GB after start
filter = IOFilter.new(real_file, offset, length)
# The api only sees the 1GB of data in the middle
opaque_api(filter)
The filter_io project looks like it would be the easiest to adapt to do this, but doesn't seem to support this use case directly.
I think you would have to write it yourself, as it seems like a rather specific thing: you would have to implement all (or the subset you need) of IO's methods using a chunk of the opened file as the data source. An example of the "specialty" would be writing to such a stream: you would have to take care not to cross the boundary of the given segment, i.e. constantly keep track of your current position in the big file. It doesn't seem like a trivial job, and I don't see any shortcuts that could help you; a sketch of the read-only part is below.
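A minimal sketch of such a wrapper, implementing only read and a simplistic seek (IOFilter is the name from the question; everything inside is an assumption, read-only and untested):
class IOFilter
  def initialize(io, offset, length)
    @io, @offset, @length = io, offset, length
    @pos = 0 # current position relative to the window start
  end

  # Read up to n bytes, clamped so we never cross the window boundary
  def read(n = nil)
    remaining = @length - @pos
    return nil if remaining <= 0
    n = remaining if n.nil? || n > remaining
    @io.seek(@offset + @pos)
    data = @io.read(n)
    @pos += data.bytesize if data
    data
  end

  # Only IO::SEEK_SET is handled in this sketch
  def seek(amount, whence = IO::SEEK_SET)
    @pos = amount
    0
  end
end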
Perhaps you can find an OS-based solution, e.g. making a loopback device out of part of the large file (see man losetup, particularly its -o and --sizelimit options, for example).
Variant 2:
If you are ok with keeping the contents of the window in memory all the time, you may wrap StringIO like this (just a sketch, not tested):
def sliding_io filename, offset, length
  File.open(filename, 'r+') do |f|
    # read the window into a buffer
    f.seek(offset)
    buf = f.read(length)
    # wrap the buffer in a StringIO and pass it to the given block
    StringIO.open(buf) do |buf_io|
      yield(buf_io)
    end
    # write the altered buffer back to the big file
    f.seek(offset)
    f.write(buf[0, length])
  end
end
And use it as you would use the block variant of IO#open.
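For example, with the numbers from the question:
sliding_io('my-big-file', 1 * 2**30, 1 * 2**30) do |window|
  opaque_api(window)
end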
I believe the IO object has the functionality you are looking for. I've used it before for MD5-summing similarly sized files.
incr_digest = Digest::MD5.new
File.open(filename, 'rb') do |io|
  while chunk = io.read(50000)
    incr_digest << chunk
  end
end
This was the block I used, where I was passing the chunk to the MD5 Digest object.
http://www.ruby-doc.org/core/classes/IO.html#M000918