Who else is excited for the Mavo Edge?!
June 21, 2020 at 4:50 pm #5762
I’m looking forward to seeing some footage with it. Everything looks amazing; the only downside is no CinemaDNG/RAW recording.
It would be a pity if the footage couldn’t be graded/processed in DaVinci Resolve. I’m aware ProRes RAW is available on a good number of platforms, but I don’t think that’s a big enough trade-off to leave Resolve.
Anyone have any insights on this?
June 23, 2020 at 5:20 am #5763
Very excited for the camera as well – pre-ordered it. Not sure what the status of ProRes RAW will be. I’ve used regular ProRes in DaVinci Resolve, but admit it’s not the same amount of flexibility as working with RAW.
June 26, 2020 at 5:02 am #5769
Very excited — I’ve pre-ordered it too! They’ve been quiet though…
July 7, 2020 at 1:06 pm #5772
Hey guys, not to spoil this RAW party, but let’s be real: there are no image-quality advantages over ProRes 4444 when working with RAW. The only thing RAW has going for it is that it’s potentially smaller than ProRes 4444 while staying around the “visually lossless” mark. Another downside is that you cannot downsample RAW in camera (yes, the Sigma fp does scaling, which is terrible). For me the sweet spot of the Edge will be oversampled 4K with a full sensor readout. If anyone has questions about the upcoming Mavo Edge, please ask, and I promise I will get back to you!
July 30, 2020 at 5:18 am #5786
I have a maybe-simple question. If file size and post-processing time weren’t an issue, which would yield better results: 1. shooting 8K ProRes RAW and downscaling it to 4K in post, 2. shooting 4K oversampled in ProRes, or 3. shooting 4K ProRes RAW (cropped sensor)? Is downscaling 8K to 4K in post equivalent to shooting 4K oversampled? Thanks!
August 10, 2020 at 2:52 pm #5791
Hi @garenmirzaian – I’m pretty sure downscaling 8K to 4K gives you the advantage of greater bit depth. I’m sorry, but I can’t find an article to reference; the idea is that four pixels will be combined into one pixel in your downscaled image. There will most likely be slight gradations in color or brightness between those four pixels, which get averaged into the final pixel, giving smoother transitions between colors than would have been captured in an image with less starting resolution. Since the dynamic range is “baked in” to the sensor, the additional resolution will not enable the camera to capture more brightness range than it already has, but in downscaling you will be adding more gradations in color. Make sense?
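A tiny numerical sketch of that averaging idea (hypothetical pixel values, plain Python, no real camera pipeline): the average of a 2x2 block of quantized values can land *between* the original quantization steps, which is exactly the extra tonal precision being described.

```python
# Four neighboring 10-bit pixel values from a smooth gradient,
# as an 8K capture might record them (hypothetical numbers).
block = [512, 513, 513, 514]

# Downscaling 8K -> 4K merges each 2x2 block into one output pixel.
averaged = sum(block) / len(block)
print(averaged)  # 513.0

# With values like these, the average falls on a half step --
# a level no single 10-bit photosite could have recorded.
half_step = sum([512, 512, 513, 513]) / 4
print(half_step)  # 512.5
```

Averaging four values shrinks the quantization granularity by up to a factor of four, i.e. up to two extra bits of tonal precision in smooth areas, noise permitting.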
I can’t find the article (it was probably on a forum) where someone explained this to me, but here’s a demonstration of bit depth. The difference between the last two images (8- and 32-bit) is noticeable, and a smaller leap than that is what I think happens when you downscale from 8K to 4K. The camera is natively 10-bit in ProRes (although the ProRes RAW might be 12-bit?). My guess is that you’re adding 1–2 additional bits of color information in that downscale. But I’d love for someone who really knows this stuff to chime in.
August 13, 2020 at 5:57 am #5793
Hi all! I’m new here – not sure if this is the right thread to ask, but saw it’s about the Edge, so apologies if I am posting out of order!
I’ve been reading a lot about the Kinefinity lineup over the past year and have been contemplating moving to their system with the forthcoming Edge, as the output from their other cameras has looked very impressive. Does anyone know if there are any updates beyond the announcement? I can’t seem to find any. Thanks in advance!
August 14, 2020 at 4:40 am #5794
Hey Raafi, as soon as you get it you have to do one of those cool “box opening” sequences!
August 14, 2020 at 5:13 am #5795
September 9, 2020 at 9:53 pm #5854
Hey Raafi, oversampling is actually quite an interesting procedure, as you’ve already noticed.
A Bayer sensor’s bit depth is very hard to pin down. For luminance we have the log curve; most cameras don’t have an ADC higher than 12 bits, which basically doesn’t yield a stop range beyond 14–15 and an effective DR of around 11.8. With a proper log curve and a high enough bitrate, this can easily be fitted into a 10-bit file, especially considering that most cameras, Kinefinity included, don’t have an effective dynamic range above 12 stops.

Chroma sampling is a completely different story and is determined by the physical limits of the Bayer color filter array, its inherent noise, and the issue of debayering. Note that a red color filter will still pass a bit of green and barely any blue; if it could block green completely that would be great, but it would impose higher costs and lower sensitivity. Since each channel has its own noise level, it actually distinguishes far fewer levels than the theoretical 12 bits of RAW data suggest. Once you combine these pixels by oversampling, the color accuracy is greatly improved. If you oversample in-camera, a 12-bit codec is needed; but if you want to record all pixels (full sensor readout without binning), it’s better to go with 10-bit and let your post program, like Scratch or Resolve, do the math, rather than waste your precious hard disks on something that’s not noticeable. I wish I had more romantic news than this, but these are the facts.

A camera like the Alexa, which combines pixel readouts to get higher dynamic range, also benefits from this at full pixel readout in terms of color fidelity, though its 16-bit RAW is a bit of an overkill compared to 12-bit ProRes. So if you’re an Arri shooter, I recommend going with 12-bit at all times. As you sometimes see, a night shoot at higher ISO can benefit a bit from ARRIRAW in terms of shadow noise structure, but this is mostly due to higher bitrates: more noise means more variation, so compression kicks in harder.
Now, back to the story of oversampling. A sensor that samples at a very high rate (like the Blackmagic URSA 12K) won’t need a dense OLPF; aliasing effects will be much lower, also when you oversample. The Mavo Edge will always provoke the “I don’t need 8K” discussion, but that discussion is for dummies. If someone could make a 20K sensor that can pixel-bin (thus not scale or line-skip) down to 2K, I would sign up for it; the issue, though, is that the DR would be low, because the pixel pitch would be very small. But technically both color accuracy would improve and aliasing effects would be gone. When a camera can deliver us 4K, I’m happy. As we know, 2.5K is around the maximum we can resolve when sitting in the middle of an average cinema, 4K when sitting front row (pan and scan), and SD from a back seat. But that’s not the discussion: a camera needs to sample at a very high resolution to get past the physical limits of the Bayer sensor, both color-wise and spatial-resolution-wise! The Arri Alexa is known to be very soft in terms of resolution (MTF charts); that’s due to its very dense OLPF, because the pixels are simply too big, so we need pixel blur or we get moiré/aliasing effects straight away. Not that filmmakers care about sharpness, but I would much rather soften my images in post and have less OLPF smear on lenses like the Leica M 0.8s. Long story short: if anyone wants to discuss oversampling etc., I’m here! One side note: theoretically a camera should just record at its highest resolution and do the oversampling in post in a good NLE or post program (DaVinci Resolve), but this is not the case with the Edge. I know this sensor has an on-sensor 2x2 pixel-binning mode (four pixels become one), which for a couple of reasons is better than doing it after the file is recorded, once the pixels have gone through many passes in camera (noise filtering etc.).
September 10, 2020 at 7:53 pm #5855
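For concreteness, here is a minimal sketch of what a 2x2 bin does to raw sample data (plain Python list-of-lists; this ignores the Bayer pattern and the in-camera noise-filtering passes the post mentions):

```python
def bin_2x2(frame):
    """Combine each 2x2 block of pixels into one (four become one)."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[y][x] + frame[y][x + 1]
             + frame[y + 1][x] + frame[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 4x4 "sensor" becomes a 2x2 frame.
frame = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(frame))  # [[13.0, 23.0], [33.0, 43.0]]
```

Doing this on-sensor, before readout and compression, is what the post contrasts with scaling or line-skipping a recorded file.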
Any new updates on the release of the Edge?