I am currently working on a Solana project where there are 4 different animations (mp4, originally gifs). I would like to turn the first into 1500 NFTs, the second into 500, the third into 190 and the fourth into 10.
I need a collection of 2200 NFTs with all the corresponding metadata (json) files that I can then use on the Candy Machine.
How should I do this?
Thanks
For a single mp4 in the Candy Machine, the easiest solution would be to use hiddenSettings.
Since you have more than one, I would propose the following:
1. Create an asset folder containing only these four assets + JSON files (plus a placeholder PNG image for each).
2. Run the Candy Machine upload. This uploads every asset and metadata file it can find, which is only the four.
3. Open the cache file (in the .cache folder) and look at the items{} object. This is what you have to modify now: duplicate each item as often as needed, make sure onChain is false, and give every entry you add a new index. For the indices, the easiest no-code solution is Excel, used a bit creatively.
4. Once the cache file modification is done, just run upload again to have the config lines written.
If you want to use mp4 you should put it in animation_url in your metadata, and additionally provide a PNG as image for wallets and services that don't support mp4. The process above stays the same, though. For more info about this, check the Metaplex token metadata standard.
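If Excel feels clumsy, the duplication in step 3 can also be scripted. A minimal sketch in Python, assuming the cache's items object is keyed by stringified indices and each entry carries link/name/onChain fields — verify against your actual .cache file before relying on this:

```python
import json

def expand_cache_items(items, copies_per_item):
    """Duplicate each uploaded cache entry the requested number of times,
    renumbering with fresh sequential indices and resetting onChain so the
    next `upload` run writes a config line for every copy."""
    expanded, next_index = {}, 0
    for key in sorted(items, key=int):
        for _ in range(copies_per_item[key]):
            entry = dict(items[key])
            entry["onChain"] = False
            expanded[str(next_index)] = entry
            next_index += 1
    return expanded

# Hypothetical usage against a real cache file (path is an example):
# cache = json.load(open(".cache/devnet-example.json"))
# cache["items"] = expand_cache_items(cache["items"],
#                                     {"0": 1500, "1": 500, "2": 190, "3": 10})
# json.dump(cache, open(".cache/devnet-example.json", "w"), indent=2)
```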
Well, the first step is to take the gif or mp4 animation and extract all the frames, meaning capture an image for every single frame until you have 2200 or however many frames you need. If the video is recorded at 24 fps (frames per second), you'd need about 92 seconds of content to make that happen.
Then you either use the same metadata for each image or modify it so each one has a unique number.
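As a sanity check on those numbers: at a fixed frame rate, the clip length you need is just the frame count divided by fps. A small sketch — the ffmpeg command in the comment is one common way to dump frames, with hypothetical file names:

```python
def seconds_needed(frames, fps):
    """Clip duration required to yield `frames` images at `fps`."""
    return frames / fps

# 2200 frames at 24 fps needs roughly a minute and a half of footage:
print(round(seconds_needed(2200, 24), 1))  # 91.7

# One common way to dump every frame as a numbered image (example paths):
# ffmpeg -i input.mp4 -vsync 0 frame_%04d.png
```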
I followed the guide below, which worked perfectly apart from having to update the stack to use Node 12 instead of 8.
https://aws.amazon.com/blogs/networking-and-content-delivery/serverless-video-on-demand-vod-workflow/
It takes an uploaded video file from an S3 bucket and runs it through the AWS media converter and outputs HLS video files for on-demand videos.
Issue:
The Lambda function converts the file into 6 different bitrates; I only want 3. So I went into the Lambda function and edited the config file so that there are only 3 presets, not 6.
Yet when I encode more videos, it still produces all 6 presets.
It's as if the Lambda file is not saving, or, more likely, I'm not understanding how the whole process works.
How do I get this to only encode 3 HLS bitrates?
You have to go to the AWS MediaConvert console. Once there, go to job templates. There should be three: one for 2160p source files, and two more for 1080p and 720p. Go into each one and you'll see a list of outputs; you should see HLS with 6 outputs. Click on HLS and, at the top of the page, click Update; then you can remove the unwanted outputs. You should click Export JSON at the top of the page first, just in case you make a mistake and want to roll back without deploying the whole stack again.
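If you'd rather edit the exported JSON than click through the console, the pruning can be scripted. A sketch, assuming the exported template follows the usual MediaConvert job shape (Settings.OutputGroups[].Outputs[], each output tagged with a NameModifier) — check your own export, and the modifier strings here are examples:

```python
import json

def keep_renditions(template, wanted_modifiers):
    """Drop every output whose NameModifier isn't in the wanted set."""
    for group in template["Settings"]["OutputGroups"]:
        group["Outputs"] = [
            out for out in group["Outputs"]
            if out.get("NameModifier") in wanted_modifiers
        ]
    return template

# Hypothetical usage with an exported template file:
# tpl = json.load(open("hls_1080p_template.json"))
# tpl = keep_renditions(tpl, {"_1080", "_720", "_360"})
# json.dump(tpl, open("hls_1080p_template.json", "w"), indent=2)
```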
I'm trying to find a way to sort videos by their bytes-per-second (B/s) ratio. I don't mean the bitrate one can set when rendering videos, but the actual "how big is this file" divided by "how long is the video" ratio.
The videos are in different folders (all contained in one parent folder) and I don't want to change their location with the sorting. I want a descending list with the filename, optionally the path to that file and the ratio of b/s; commandline-output would be fine.
Is there any way to do this in Windows natively? I assume there isn't, so my question is rather: How would one do that? My best guess is to try to write a .bat script for that but there might also be programs for something like that already.
Ok, this seems quite easy to do by getting the bitrate of the files via ffmpeg
FFMPEG - batch extracting media duration and writing to a text file
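A sketch of that approach in Python: ffprobe (part of the ffmpeg install) reports each file's duration, the filesystem reports its size, and the ranking itself is plain sorting. The ffprobe flags are standard; the folder name and extension filter are examples:

```python
import os
import subprocess

def duration_seconds(path):
    """Container duration via ffprobe (assumes ffprobe is on PATH)."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(result.stdout)

def rank_by_byte_rate(entries):
    """entries: iterable of (path, size_bytes, duration_s).
    Returns (path, bytes_per_second) pairs, largest ratio first."""
    return sorted(((path, size / dur) for path, size, dur in entries),
                  key=lambda item: item[1], reverse=True)

# Hypothetical usage over a parent folder of videos:
# entries = [(p, os.path.getsize(p), duration_seconds(p))
#            for root, _, files in os.walk("videos")
#            for f in files if f.endswith(".mp4")
#            for p in [os.path.join(root, f)]]
# for path, rate in rank_by_byte_rate(entries):
#     print(f"{rate:12.0f} B/s  {path}")
```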
I'm developing a system that uses ffmpeg to store IP camera videos.
I'm using the segmentation command to store a 5-minute video per camera.
I have a WPF view where I can search historical videos by date. In this case I use the ffmpeg concat command to generate a video of the desired duration.
All this works excellently. My question is: is it possible to concatenate the current file of the segmentation? I need, for example, to search from date X up to the current time, but the last file has not been generated by ffmpeg yet. When I concatenate the files, the last one does not show up because its segment isn't finished.
I hope someone can give me some guidance on what I can do.
Some video formats remain playable while they are still being written. That means you can simply make a copy of the unfinished segment and use the copy in the merge.
I suggest you use the flv or ts format for this; mp4 does not support it. Also note that there is a delay between encoding and the data actually being written to disk.
I'm not sure whether a direct copy causes some data problems at the end of the segment file, but ffmpeg will ignore that part of the data during the merge, so the merged video should be fine.
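Concretely, the approach can be: copy the still-open segment to a scratch name, write a concat-demuxer list file that ends with that copy, and run ffmpeg with -f concat -c copy. A sketch of the list-building part — the segment names are hypothetical, and the quote-escaping follows the concat demuxer's documented syntax:

```python
import shutil

def concat_list(segment_paths):
    """Render an ffmpeg concat-demuxer list file for the given segments.
    A single quote inside a quoted path is escaped as '\\'' per the
    demuxer's syntax."""
    lines = []
    for path in segment_paths:
        escaped = path.replace("'", "'\\''")
        lines.append(f"file '{escaped}'")
    return "\n".join(lines) + "\n"

# Hypothetical usage: snapshot the open segment, then concatenate.
# shutil.copy("cam1_0042.ts", "cam1_0042_snapshot.ts")
# with open("list.txt", "w") as f:
#     f.write(concat_list(["cam1_0040.ts", "cam1_0041.ts",
#                          "cam1_0042_snapshot.ts"]))
# Then: ffmpeg -f concat -safe 0 -i list.txt -c copy merged.ts
```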
I used C and ffmpeg to write a program that muxes real-time audio and video into MP4 files, and everything works fine. But when there is a sudden power failure during recording, the MP4 file being written is damaged and VLC cannot play it.
I think the reason is that av_write_trailer is never called, so the index and timestamp information is lost. I used the Araxis Merge tool to compare a file where av_write_trailer was called successfully against a damaged file where it wasn't, and found two differences:
1. In the damaged file, the box size values in the file header are not right.
2. The damaged file has no proper end of file.
Now I want my program to automatically repair the damaged files after power is restored, but I could not find an effective method on Google.
My train of thought is this: during normal recording, save once per second the two pieces of information a damaged file is missing — the box sizes and the end-of-file data — to a local file, and delete that file once the MP4 has been written completely. If power is lost and the file is damaged, then on the next power-up, read the saved file and write the corresponding information back to the right positions in the damaged file. But the problem is that I don't know how to obtain the box sizes and the end-of-file data. Is this feasible? If so, what should I do? Looking forward to your reply!
The main cause of MP4 file damage is that the header or trailer was not written properly; in that case the whole file becomes junk data and no media player is able to play the broken mp4 file.
So:
First, the broken file has to be repaired before it can be played.
There are some applications and tricks available to repair the file and get the data back.
Links are given below:
http://grauonline.de/cms2/?page_id=5 (Windows / Mac) (paid :( )
https://github.com/ponchio/untrunc (Linux-based OS) (of course, free!!!)
Second, manually repairing the corrupt file with a hex editor.
The logic behind this hack:
It requires the broken mp4 file and a good video file, where both videos were captured with the same camera. The good file should also be larger than the broken one.
Open both video files in any hex editor, copy the trailer part from the good video file into the broken one, and save it. Done!
Note: always keep a backup of the video file.
Follow these links for detailed information:
http://janit.iki.fi/repair-corrupted-mp4-video/
https://www.lfs.net/forum/thread/45156-Repair-a-corrupt-mp4-file%3F
http://hackaday.com/2015/04/02/manual-data-recovery-with-a-hex-editor/
http://www.hexview.org/hex-repair-corrupt-file.html
Third, even though MP4 has many advantages, this kind of error is unpredictable and difficult to handle.
Using a format such as MPG with AV_CODEC_ID_MPEG1VIDEO / AV_CODEC_ID_MPEG2VIDEO (in FFmpeg) may help to avoid this kind of error: MPG does not require any header/trailer, so after a sudden power failure an MPG file can still play back whatever frames were stored so far.
Note: there are other formats and codecs with this kind of property as well.
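To get at the box sizes the question asks about, you can walk the top-level MP4 boxes yourself: each box starts with a 4-byte big-endian size followed by a 4-byte type. A minimal sketch (it deliberately ignores the 64-bit largesize and size==0 cases, so treat it as an illustration rather than a full parser):

```python
import struct

def top_level_boxes(data):
    """Return (type, size) for each top-level box in an MP4 byte string.
    A truncated file typically ends with a box whose declared size
    overruns the remaining bytes."""
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size, = struct.unpack_from(">I", data, pos)
        box_type = data[pos + 4:pos + 8].decode("ascii", "replace")
        boxes.append((box_type, size))
        if size < 8:   # size==1 (largesize) and size==0 (to EOF) not handled
            break
        pos += size
    return boxes
```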
As you may know, when you record a video on a Windows Phone, it is saved as a .mp4. I want to be able to access the video file (even if it's only stored in isolated storage for the app) and manipulate the pixel values for each frame.
I can't find anything that allows me to load a .mp4 into an app, then access the frames. I want to be able to save the manipulated video as .mp4 file as well, or be able to share it.
Has anyone figured out a good set of steps to do this?
My guess was to first load the .mp4 file into a Stream object. From here I don't know what exactly I can do, but I want to get it into a form where I can iterate through the frames, manipulate the pixels, then create a .mp4 with the audio again once the manipulation is completed.
I tried doing the exact same thing once. Unfortunately, there are no publicly available libraries that will help you with this. You will have to write your own code to do this.
The way to go about this would be to first read up on the storage format of mp4 and figure out how the frames are stored there. You can then read the mp4, extract the frames, modify them and stitch them back in the original format.
My biggest concern is that the hardware might not be powerful enough to accomplish this in a sufficiently small amount of time.