Which devices have 800x600 screen resolution?

Which devices (or simulators??) have 800x600 screen resolution?
I can see in the Google Analytics of my own website that a significant percentage (3%) of devices have this resolution. I can also see this at https://www.screenresolution.org/ (4%) and in the csv download from https://gs.statcounter.com/screen-resolution-stats/desktop/worldwide (1%).
Could this percentage be due to e-readers?

800x600 is a common default for virtual machines, server monitors, old computers, and e-readers. Basically any device with a squarish 4:3 screen ((400x300)x2) usually ends up at 800x600, and virtual machines in particular commonly default to it. Feel free to add any others!

Related

Why does a video larger than 8176 x 4088 created using AVFoundation come out with a uniform dark green color on my Mac?

When I use AVFoundation to create an 8K (7680 x 4320) MP4 with frames directly drawn onto pixel buffers obtained from the pixel buffer pool, it works with kCVPixelFormatType_32ARGB.
However, if I use kCVPixelFormatType_32BGRA, the entire video has a uniform dark green color instead of the actual contents. This problem occurs for resolutions above 8176 x 4088.
What could be causing this problem?
AVAssetWriter.h in SDK 10.15 and in SDK 11.3 says:
The H.264 encoder natively supports ... If you need to work in the RGB domain then kCVPixelFormatType_32BGRA is recommended on iOS and kCVPixelFormatType_32ARGB is recommended on OSX.
AVAssetWriter.h in SDK 12.3 however says:
The H.264 and HEVC encoders natively support ... If you need to work in the RGB domain then kCVPixelFormatType_32BGRA is recommended on iOS and macOS.
AVAssetWriter.h on all three SDKs however also says:
If you are working with high bit depth sources the following yuv pixel formats are recommended when encoding to ProRes: kCVPixelFormatType_4444AYpCbCr16, kCVPixelFormatType_422YpCbCr16, and kCVPixelFormatType_422YpCbCr10. When working in the RGB domain kCVPixelFormatType_64ARGB is recommended.
Whatever the recommendations, the prelude below states that all of them are just for optimal performance, not for error-free encoding!
For optimal performance the format of the pixel buffer should match one of the native formats supported by the selected video encoder. Below are some recommendations:
Now, Keynote movie export with H.264 compression also results in the same problem, with the same size limits, on my Mid-2012 15-inch Retina MacBook Pro running Catalina (which supports up to Keynote 11.1). This problem doesn't occur on a later Mac running Monterey, where the latest version 12.2 of Keynote is supported.
I have not included code because Keynote movie export is a simple means to reproduce and understand the problem.
My motivation for asking this question is to obtain clarity on:
What is the right pixel format to use for MP4 creation?
What are safe size limits under which MP4 creation will be problem free?

ffmpeg: transmission problems / artifacts in rtsp screen grab - might be a WiFi problem

In short: Is there a way to "force" ffmpeg to not save a grabbed frame if there are transmission problems? Or any other software that does the same and I just don't know of?
Long story:
Updating my house surveillance from almost 10-year-old DCS-932L cameras to Tapo C100 cameras, I changed the image delivery method from FTP push to RTSP grab via ffmpeg.
I had written a program in C++ to check for "bad" pictures from the old cameras, where parts of the picture tended to be simply black once every minute or so (I'm grabbing a pic every 2 seconds). The Tapo C100 doesn't feature FTP push, thus I tried (after a few days of trying):
ffmpeg.exe -y -i rtsp://user:pass#10.0.0.%ld:554/stream1 -vframes 1 %scamera\rtsp.jpg -loglevel quiet
This works absolutely perfectly in my main house, which features a Fritz!Box 7590 and a set of Fritz!Powerline repeaters (a 510 and two 540e), plus one Fritz! 600 WiFi repeater, as my phone line and the router are in the basement.
In my holiday home, though, it doesn't. The WiFi is managed by a hybrid DSL/5G box I have no alternative to, a Huawei DN9245W, which works as the DHCP server because that is almost impossible to change. Everything "real" is managed by another Fritz!Box 7590, connected via Ethernet, and another set of Fritz!Powerline repeaters (a 510 and two 540e) plus half a dozen WiFi repeaters, mostly Fritz! 310, 450E and 600. The house was partially built with local stones, which are very iron-y, and there's a lot of metallized glass. The full set is shown in the image.
Now, this does produce different artifacts, about two per minute or in every 15th picture, see
Image with artifacts No. 1
Thinking this might be a transmission problem, I tried forcing the stream grab over TCP, because while RTP over UDP doesn't retransmit lost packets, TCP does:
ffmpeg.exe -rtsp_transport tcp -i rtsp://user:pass#10.0.0.%ld:554/stream1 -y -f image2 -update 1 -r 1 -vframes 1 -qscale:v 30 %scamera\rtsp.jpg -loglevel quiet
Which didn't change the artifacts much, see Image with artifacts No. 2
The house now has a total of 12 cameras, six of which are each "managed" by an older Dell Optiplex desktop (bought used off eBay, with an i3 or i5 processor from about 2015), which runs at about 65% load. My software will check if the grabbed picture is finished saving (to RAMdisk), rename it, and check for artifacts; if there are any, it drops the picture, otherwise it converts it to a bitmap, compares it to the previous image, guesses if there's a change, marks that change with a rhombus, rates it, saves the result as a JPEG file, and then does some other stuff that's not relevant here. See: image of my program running with six cameras.
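A cheap way to implement the "finished saving" check is to verify that the file on the RAM disk is a structurally complete JPEG: every JPEG starts with the SOI marker (FF D8) and ends with the EOI marker (FF D9), so a file caught mid-write will usually fail the tail check. A minimal sketch (the file path is a placeholder, and this only catches truncated files, not decode artifacts inside a validly written file):

```python
def is_complete_jpeg(data: bytes) -> bool:
    """Return True if data looks like a fully written JPEG file.

    A JPEG begins with the SOI marker (FF D8) and ends with the EOI
    marker (FF D9); a file still being written usually lacks the tail.
    """
    return len(data) >= 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

# Hypothetical usage on a grabbed frame:
# with open(r"camera\rtsp.jpg", "rb") as f:
#     if is_complete_jpeg(f.read()):
#         pass  # safe to rename and process
```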
I did try grabbing keyframes only, but a bunny or deer or burglar hopping through my property doesn't produce a keyframe, so that turned out to be missing the point.
I'm out of ideas here. It does work flawlessly in the main house. It doesn't in the holiday house. I can hardly install more repeaters; I already tried mesh and not-mesh, and the problem isn't exactly wifi overload, because even with just one camera running, it still persists. In certain places. Some have no problems. Reasons? No clue. I really hope someone has a good idea.
I got a couple of these cameras, and working on what I'd call a poor-quality WiFi network I had similar problems until I switched the camera via the app to 1080P mode. After that the frames seem to be fine.
My camera defaulted to 720P mode, plus the (always available?) 320P stream2.
My $0.02. Thanks for your post BTW.

Quickest way to add an image watermark on video in Android?

I have used ffmpeg and mp4parser to add an image watermark on video.
Both work when the video is small, say less than 5 MB to 7 MB, but with larger videos (anything above 7 MB or so) it fails and doesn't work.
What resources can help with adding a watermark to a video quickly? If you have any useful resources, please let me know.
It depends on what exactly you need.
If the watermark is just needed when the video is viewed on the android device, the easiest and quickest way is to overlay the image with a transparent background over the video view. You will need to think about fullscreen vs inline and portrait vs landscape to ensure it lines up as you want.
If you want to watermark the video itself, so that the watermark is included if the video is copied or sent elsewhere, then ffmpeg is likely as fast as other solutions on the device itself. If you are able to send the video to a server and have the watermark applied there you will have the ability to use much more powerful compute resource.
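For the ffmpeg route, the usual approach is the overlay filter, which burns the image into every frame. A sketch that just assembles the command line (file names are placeholders; on Android you would typically hand these arguments to an ffmpeg wrapper library rather than a shell):

```python
def watermark_cmd(video: str, image: str, out: str) -> list:
    """Build an ffmpeg command that overlays `image` onto `video`.

    overlay=10:10 places the watermark 10 px from the top-left corner;
    -codec:a copy leaves the audio untouched so only video is re-encoded.
    """
    return [
        "ffmpeg", "-i", video, "-i", image,
        "-filter_complex", "overlay=10:10",
        "-codec:a", "copy", out,
    ]

print(" ".join(watermark_cmd("input.mp4", "logo.png", "out.mp4")))
```

Re-encoding time grows with video size either way, which is consistent with small clips working and larger ones appearing to fail; running the job on a server, as suggested above, sidesteps the device's limits.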

Get/set video resolution when capturing image

I'm capturing images from my webcam with some code that is mainly based on this: Using the Sample Grabber.
Here I only get the default resolution of 640x480, while the connected camera is capable of more (other capture applications show a bigger resolution).
So, how can I:
retrieve the list of available resolutions
set one of these resolutions so that the captured image comes with it?
The IAMStreamConfig interface lists capabilities and lets you select the resolution of interest. Enumerating media types on a not-yet-connected pin will list the specific media types (and resolutions) the camera advertises as supported.
More on this (and links from there):
Video recording resolution using DirectShow
Video Capture output always in 320x240 despite changing resolution

Where does directshow get image dimensions from?

We are using a directshow interface to capture images from a video stream. These images are presented in a fixed size window.
Once we have captured an image we store it as a bitmap. Downstream we have the ability to add annotation to the image, for example letters in a fixed size font.
In one of our desktop environments, the annotation has started appearing at half the size that it normally appears at. This implies that the image we are merging the text onto has dimensions that are maybe twice as large.
The system that this happens on is a shared resource as in some unknown individual has installed software on the system that differs from our baseline.
We have two approaches - the 1st is to reimage the system to get our default text size behaviour back. The 2nd is to figure out how directshow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the directshow literature indicates that the above is not a trivial task. The original work was done by another team that did not document what they did. Can anybody point us in the direction of what directshow object we want to deal with to properly size the sampled image?
DirectShow - as a framework - does not deal with resolutions directly. Your video source (such as capture hardware) is capable of providing video feed in certain resolution which you possibly can change. You normally use IAMStreamConfig as described in Configure the Video Output Format in order to choose capture resolution.
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions you captured it at. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the task. Unfortunately it does not fit the DirectShow pipeline smoothly, so you need a fitting and/or custom filter for resizing.
When filters connect in DirectShow, they have an AM_MEDIA_TYPE. Here you will find a VIDEOINFOHEADER with a BITMAPINFOHEADER and this header has a biWidth and biHeight.
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.
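Since the captured frames are stored as bitmaps, a quick sanity check for the half-size-annotation problem is to read biWidth/biHeight straight out of a saved file and compare them with what the annotation code assumes. A minimal sketch (this parses the standard BMP file layout, nothing DirectShow-specific):

```python
import struct

def bmp_dimensions(data: bytes) -> tuple:
    """Read biWidth/biHeight from a BMP file's BITMAPINFOHEADER.

    The 14-byte BITMAPFILEHEADER is followed by the BITMAPINFOHEADER,
    whose biWidth and biHeight are signed 32-bit little-endian values
    at byte offsets 18 and 22 (biHeight may be negative for a
    top-down bitmap, hence the abs()).
    """
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    width, height = struct.unpack_from("<ii", data, 18)
    return width, abs(height)
```

If the file reports, say, 1280x960 where the rest of the pipeline assumes 640x480, that confirms the capture pin was negotiated at a different media type on that machine and the annotation scaling, not DirectShow itself, is what needs adjusting.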
