OpenELEC XBMC image reduces SD card to 128 MB

I'm preparing an SD card with OpenELEC XBMC to use in a Raspberry Pi. To start, I formatted the SD card using this software, then followed the steps on this page to load the image onto the card. Before writing the image, the SD card's size is around 4 GB (as it should be). After writing the image, the size drops to around 128 MB. If I format the card again, it returns to 4 GB; re-writing the image puts it back at 128 MB.
I'm still awaiting the delivery of my Raspberry Pi, so I can't test it yet, but I find it hard to imagine that this is normal behavior or that the Raspberry Pi would recognize the full 4 GB. Has anybody had this issue?
EDIT:
I'm using Windows 8.1
UPDATE:
Tried it in my Raspberry Pi and it shows 1 GB. Still 3 GB missing.

The imaging tool has probably rewritten the partition table and created partitions that your OS doesn't recognise. I'd be willing to bet that the only partition your OS recognises is the 128 MB one: OpenELEC is Linux based, so, for example, one of the partitions will be Linux swap.
If you dd/bitwise-copy (which is what I assume the utility is doing) an image that's only 1 GB, the disk will only show up as 1 GB. You can resize the partition, or create a new one in the unused space, once the card is in a machine that understands Linux partitions; a sketch of that is below.
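A minimal sketch of that check-and-resize on a Linux box (or on the Pi itself), assuming the card shows up as /dev/mmcblk0, the Linux data partition is number 2, and its filesystem is ext4; verify all of that with lsblk before running anything:
# List every partition on the card - Windows only sees the small FAT one
sudo fdisk -l /dev/mmcblk0
# Grow partition 2 to fill the card, then grow the filesystem inside it
sudo parted /dev/mmcblk0 resizepart 2 100%
sudo resize2fs /dev/mmcblk0p2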

Related

Using more than 2 NV_ENC at a time with FFMPEG

I'm currently generating timelapse videos with fluent-ffmpeg running on Node.js, using one CPU thread per job. It takes roughly 1 minute to generate a 10-second timelapse. I'm generating many at the same time (basically one per thread), such that I tend to get the best performance at 8 worker threads. ... overall system throughput is about one video per 12 seconds.
GPU processing using h264_nvenc brings the single-job time down to about 3-4 seconds. Yippee! I went out and bought some NVIDIA GTX 1660s to take advantage.
Unfortunately, when I go to generate the 3rd simultaneous video, I get a "Conversion failed!" error from FFmpeg.
Some basic research seems to show you can only run 2 at a time, perhaps 3 with updated drivers.
Is there a way around this? This post indicates the limit is artificial and can be worked around: https://www.techpowerup.com/268495/nvidia-silently-increases-geforce-nvenc-concurrent-sessions-limit-to-3
Perhaps there is a way to use all the CUDA/tensor/etc. cores to render timelapse videos instead of relying only on the limited NVENC sessions?
The current limit is 3 renders on both my GTX 1060 and my RTX 2080 Ti, and the other post says the GTX 1660 is the same, so this is clearly an artificial limit. Looking at the NVIDIA link posted above, which has a list of cards and their NVENC/NVDEC capabilities, most NVIDIA gaming cards have this 3-render limit, while most of the modern (Pascal and up) Quadro cards allow unlimited renders per card.
As another workaround, you can put multiple gaming cards in one system; FFmpeg lets you send a particular job to the card of your choosing. The same encoder module is in the GTX 1660 as in the RTX 2080 Ti, so there shouldn't be much speed difference between low-end and high-end cards; maybe some minor difference from memory bus width, but I haven't compared the 1660 and 2080 Ti directly. What I'm saying is: if you need more encoding horsepower, just buy another couple of low-end cards and divide up the workload using FFmpeg's built-in GPU selection, as sketched below.
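A minimal sketch of that per-card dispatch (the input patterns and output names are placeholders; h264_nvenc's -gpu option picks the card by index):
# Run two encodes in parallel, one pinned to each NVIDIA card
ffmpeg -i frames_a_%04d.jpg -c:v h264_nvenc -gpu 0 timelapse_a.mp4 &
ffmpeg -i frames_b_%04d.jpg -c:v h264_nvenc -gpu 1 timelapse_b.mp4 &
wait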

STM32F411: I need to send a lot of data over USB at high speed

I'm using an STM32F411 with the USB CDC library, and the max speed I get with this library is ~1 Mb/s.
I'm building a project with 8 microphones connected to the ADC lines (this part works fine). I need a 16-bit signal, so I increase the effective resolution by summing the first 16 readings from one line (the ADC gives only 12-bit samples). I need 96k 16-bit samples per second for one line, so it's 0.768M samples per second for all 8 lines. That stream needs 12,000 Kb of space per second, but the STM32 has only 128 KB of SRAM, so I decided to send about 120 transfers of 100 Kb each per second.
The conclusion is that I need ~11.72 Mb/s to send this.
The problem is that I can't do that, because USB CDC limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need a pointer or a library.
Or maybe I should set up an "audio device" in CubeMX?
If the small b in your question means bytes, the answer is: it is not possible, because your micro has a Full Speed USB peripheral, whose maximum speed is 12 Mbits per second.
If it means bits, then your ~1 Mb/s assumption is wrong, but you still will not reach 12 Mbit/s of payload transfer.
You may try to write your own USB class (only worth it if b means bits), but I'm afraid you will not find a ready-made library, and you would also need to write the device driver on the host computer.
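To make the arithmetic concrete (all numbers taken from the question; bc is just used as a calculator):
# Required: 8 lines x 96000 samples/s x 16 bits per sample
echo '8 * 96000 * 16' | bc   # = 12288000 bit/s, i.e. ~11.72 Mibit/s
# USB Full Speed is 12000000 bit/s on the wire, before any CDC overhead,
# so the raw requirement already exceeds the available payload budget.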

Raspberry Pi 3 - Windows 10 IoT Core SD card size reduced

I have a Kingston SD card (16 GB, Class 10, 45 MB/s) on which I installed Windows 10 IoT Core. I noticed that the SD card's size now shows as 63.7 MB.
Is this normal, or is it a problem with the card, or some other issue?
Thanks
Normal. It has to do with how the SD card is provisioned.
The card size isn't actually reduced, anyway. It is just partitioned into a few filesystems that get mounted as the operating system boots up.
Yes, it is normal. Don't worry, the total size is still 16 GB; the rest of the space is occupied by the OS's other partitions.
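If you want to see where the space went, a quick sketch using the built-in diskpart tool (the disk number is an assumption; pick the 16 GB card out of the list disk output):
diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> list partition
The 63.7 MB you see in Explorer is just the one partition Windows assigns a drive letter to; list partition should show the OS partitions holding the rest.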

FTDI driver (Windows) FT_Write() issue with large (1KB) chunk - (version 2.12.16.0)

My PC application sends a file (2 MB) in chunks of 1 KB to an embedded device.
I use the FTDI Windows driver and the classic FT_Write() API function, as my code is cross-platform.
Note: the issues below appear when I use a 1 KB chunk size. Smaller chunks (I tried 64 bytes) work fine.
The problem is that the function returns "0 bytes sent" every couple hundred packets and then gets stuck. I found a workaround: purging both TX and RX, followed by a ResetDevice() call, recovers the chip. It still happens every couple hundred packets, but at least I can send the whole file (2 MB).
But when I use a USB isolator (http://www.bb-elec.com/Products/USB-Connectivity/USB-Isolators/Compact-USB-Port-Guardian.aspx), the workaround fails.
I believe my workaround is not a graceful solution.
Note: I use a large chunk size because of a suggestion I found in the FTDI application note below:
When writing data to an FTDI device, as much data as possible should
be buffered in the application and written to the device in a single
write function call (either WriteFile for a VCP application using the
Win32 API, FT_Write if using the D2XX classic interface or
FT_WriteFile if using the D2XX FT_W32 interface). The result of this
is that the data will be written to the device with 64 bytes per USB
packet.
Any idea what's the proper fix for these issues? Is it related to FTDI initialization? My driver version is 2.12.16.0 (3/9/2016).
I saw the same problem of FT_Write() not working right when too much data was passed, while working on the library for my USB device Nusbio. I mostly work in synchronous bit-bang mode rather than UART, but it is, after all, the same hardware, driver, and API.
There are the USB 2.0 specification and the FTDI FT232RL specification, and then there is the reality of the electrons and the bits. The expected transfer speeds never really match, at least not at first. In other words, it is complicated (see more below in my referenced blog post).
In 2015 I was under the impression that with the FTDI FT232RL a chunk size of 384 bytes worked well; the number comes from the chip's datasheet (128-byte receive buffer plus 256-byte transmit buffer). A size of 500 bytes would still work, but above about 600 bytes things would fail.
I later used the FT231X, which has larger buffers (1 KB total: 512-byte receive buffer and 512-byte transmit buffer), and was able to pass 1 KB and 2 KB buffers to FT_Write(), therefore more than doubling my transfer speed. But above 2 KB things would not work.
In 2016, having read everything you can read about FTDI's USB 2.0 Full Speed chips, I came to the conclusion that FT_Write() should support buffers up to 64 KB (see the datasheets for the FT232RL, FT231X, FT232H, FT260 and FT4222).
I also did some research on serial communication from .NET faster than 115200 baud. Somehow I was able to update my C# library to send data in 32 KB buffers with FT_Write(), and it works with both the FT232RL and the FT231X chips, but I can't tell you what changed. I probably did not completely understand the ins and outs of FTDI's USB 2.0 Full Speed technology.
For example, let's say you are using the FT232RL and transferring 384 bytes at a time with FT_Write(). Knowing that there is at least a 1 millisecond latency in USB 2.0 Full Speed whatever you do, from a USB point of view you are transferring 384 × 1000 / 1024, that is 375 KB/s in theory (and that would be the max). But now, what baud rate does your embedded device support? What baud rate are you using?
The FT232RL's max baud rate is 900,000 baud, which would give you only 900000 / (1 + 8 + 1) = 90,000 bytes/s, about 87 KB/s. Right away you can tell there is going to be a problem; maybe the FTDI driver takes care of it, maybe not, I can't tell.
Redo the math based on the baud rate supported by your embedded device and a 384-byte buffer sent 1000 times per second, then slow down your USB writes with a sleep() to match your baud rate. That is where I would start.
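A minimal sketch of that pacing math (8N1 framing assumed, so 10 bits on the wire per byte; bc is just used as a calculator):
# Bytes per second the UART side can drain at 900000 baud with 8N1 framing
echo '900000 / 10' | bc    # = 90000 bytes/s, about 87 KB/s
# At 384-byte chunks, the most the line can absorb per second:
echo '90000 / 384' | bc    # = 234 chunks/s, i.e. sleep roughly 4 ms per write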

How do I partition a drive to an exact size in OSX Terminal?

I've got a 3TB drive partitioned like so:
TimeMachine 800,000,000,000 Bytes
TELUS 2,199,975,890,944 Bytes
I bought an identical drive so that I could mirror the above in case of failure.
Using Disk Utility, partitioning makes the drives a different size than the above by several hundred thousand bytes, so when I try to add them to the RAID set, it tells me the drive is too small.
I figured I could use terminal to specify the exact precise sizes I needed so that both partitions would be the right size and I could RAID hassle-free...
I used the following command:
sudo diskutil partitionDisk disk3 "jhfs+" TimeMachine 800000000000b "jhfs+" TELUS 2199975886848b
But the result is TimeMachine being 799,865,798,656 bytes and TELUS being 2,200,110,092,288 bytes. The names are identical to the originals, and I'm also formatting them as Mac OS Extended (Journaled), like the originals. I can't understand why I'm not getting exactly the same sizes when I'm being so specific in Terminal.
Edit for additional info: playing around with the numbers, regardless of what I do I am always off by a minimum of 16,384 bytes. I can't seem to get the first partition, TimeMachine, to land on 800000000000b on the nose.
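That constant 16,384-byte granularity suggests diskutil is snapping partition boundaries to an internal alignment unit. You can read back the exact byte sizes it actually produced like this (disk3 and the slice number are the question's; adjust to your setup):
# Exact size of every slice on the disk
diskutil list disk3
# Byte-exact size of one partition
diskutil info disk3s2 | grep 'Disk Size'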
So here's how I eventually got the exact sizes I needed:
Partitioned the drive using Disk Utility, saying I wanted to split it into 800 GB and 2.2 TB respectively. This yielded something like 800.2 GB and 2.2 TB (but the 2.2 TB partition was smaller than the required 2,199,975,890,944 bytes, of course).
Using Disk Utility, I edited the size of the first partition to 800 GB (from 800.2GB), which brought it down to 800,000,000,000 bytes on the nose, as required.
I booted into GParted Live so that I could edit the second partition with more accuracy than Terminal and Disk Utility and move it around as necessary.
In GParted, I looked at the original drive for reference, noting how much space it had between partitions for the Apple_Boot partitions that Disk Utility adds when you add a partition to a RAID array (I think it was 128 MB in GParted).
I deleted the second partition and recreated it leaving 128 MB before and after the partition and used the original drive's second partition for size reference.
I rebooted into OS X.
Now I couldn't add the second partition to the RAID, because I think it ended up slightly larger than the required 2,199,975,890,944 bytes (i.e., it didn't leave enough space after it for that Apple_Boot partition); I got an error when attempting it in Disk Utility.
I reformatted the partition using Disk Utility just so that it would be Mac OS Extended (Journaled) rather than plain HFS+, to be safe (matching the original).
I used Terminal's diskutil resizeVolume [drive's name] 2199975895040b command to get it to land on the required 2,199,975,890,944 bytes (notice how I had to play around with the resize size, making it bigger than my target, to get it to land where I wanted).
Added both partitions to their respective RAID arrays using Disk Utility and rebuilt them successfully.
... Finally.
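Condensing the Terminal part of those steps (the volume name, mount point, and oversized request value are from the answer above; expect to nudge the requested size until the reported result lands on your target):
# Ask for slightly more than the target; diskutil snaps down to a boundary
diskutil resizeVolume /Volumes/TELUS 2199975895040b
# Confirm the exact resulting size in bytes
diskutil info /Volumes/TELUS | grep 'Disk Size'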
