#2161
Senior Member
I don't know if this is the right thread for this... But for all the talk of display manufacturers using AI/neural networks to upscale to 8K, do we have any detailed information on the techniques that studios use when they upscale a Blu-ray disc?

Some of the AI upscaling results I've seen online, unrelated to TVs or movies, have been extremely impressive, almost indistinguishable from true-resolution imagery.
#2165
Blu-ray Samurai

Quote:
I'm probably not explaining this in the most coherent fashion, but I'm pretty sure the Sony 4K Z9D that I have already does something similar; they just didn't call it AI at the time.

[edit] If anyone is interested, I found the video in the CES 2019 thread, post 188. It's a short video, but zero in on the part starting around 0:26.

Last edited by gkolb; 01-23-2019 at 09:28 PM. Reason: added video from CES 2019
#2166
Senior Member

Quote:
Because the upscaling results on a 4K TV (even the Z9D or the LG a9) are good, but they're MUCH closer to just basic upscaling than to what REAL AI and neural network implementations can do...

Check these links: https://deepsense.ai/using-deep-lear...er-resolution/
#2167
Blu-ray Samurai

Quote:
I'm not good enough to answer this, but I know it's in the main HDR thread. I'll throw on the Beetlejuice-signal and hope you get an answer: Geoff D Geoff D Geoff D

Last edited by gkolb; 01-23-2019 at 09:43 PM. Reason: spelling
#2168
Blu-ray Emperor
Of all the shit that I pretend to know about, this exact procedure is not one of them (though they don't take the literal Blu-ray disc and start with that, in case anyone was wondering, despite the joke above).
#2169
Senior Member
You're quite resourceful, Geoff, perhaps you could dig something up.

My entire point is that recent advances in image processing from legitimate machine learning/AI can do such a great job that I doubt anyone would be able to see the difference between 2K and native 4K images. Who knows, in a few years we may see all films upscaled to 8K or 16K using machine learning and then downscaled back to 4K for the UHD Blu-ray, and it would be a remarkably better image than just utilizing the 4K master to begin with.
#2170
Blu-ray Knight
You could call it "better", but it wouldn't meet my definition. I'd rather have a lower value of actual resolution than a higher value of estimated resolution, no matter how "good" the estimated resolution's quality appears. I'm looking for representations of movies, not interpretations of them.
#2171
Senior Member

Quote:
I mean, keep in mind this isn't like replacing guns with radios in a film; it's merely cleaning up an image to be sharper, without all of the bad side effects we've had in the past like aliasing, ringing, blurriness, etc.

Another thing to remember is that in a world of fixed-pixel displays, you're going to have the image upscaled regardless. Do you watch 1080p movies with black borders along all sides of the image on a 4K TV to maintain the true pixel-count representation of the original image? If not, then you're getting the TV's interpretation of the image anyway, so why not be in favor of a smarter one?
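For context, a minimal sketch of the "basic" upscaling being contrasted with here: a plain Lanczos resize of a 1080p frame onto a 4K canvas, which adds no learned detail. Pillow is assumed, and the file names are hypothetical placeholders.

```python
# A minimal sketch of the "dumb" upscaling a fixed-pixel display (or any
# video processor) does anyway: a plain Lanczos resize from 1080p to 2160p.
# No learned detail is added; file names are hypothetical placeholders.
from PIL import Image

frame = Image.open("frame_1080p.png")                   # 1920x1080 source frame
upscaled = frame.resize((3840, 2160), Image.LANCZOS)    # stretch onto the 4K canvas
upscaled.save("frame_2160p_lanczos.png")
```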
#2172
Blu-ray Guru

Quote:
For reference, here's an article from Steve Yedlin.

Quote:
#2173
Senior Member

Quote:
"real" AI, not samsung marketing AI, is about using machine learning to see the patterns of what real high resolution imagery would look like and using that information to intelligently fill in the blanks. This type of computation just can't be done real time on a small chip inside the TV no matter what Samsung or Sony says about their new processing chips. |
#2174
Blu-ray Guru

Quote:
#2175
Blu-ray Emperor

Quote:
They scan/ingest the relevant film/digital data and then use their patented process to analyse each frame and discard all parts of the image that aren't temporally consistent from frame to frame, e.g. dirt & grain, making sure that the processing is throttled back for scenes with rain or snow lest it confuse those picture elements with what needs removing. It then takes the detail from the most well-endowed frame(s) and uses it to essentially rebuild the image in adjacent frames that lack this detail.

It's not physically uprezzing the spatial resolution from 2K to 4K or whatever, but nonetheless it can make the image look much sharper from a temporal standpoint. It still has to look like 24fps of course, and the option is there for the filmmakers to regrain/renoise the image to their specifications, but the impact of this detail enhancement can be remarkable.

It's fallen out of favour in recent years as it's still very expensive, and what was once deemed necessary to "supercharge" SD and HD images becomes a moo point when you've got studios doing their own 4K/6K/8K scans all over the place and doing their own downsampling to 2K and 4K, thus retaining dat sweet sweet 4K detail. But as it's commonly held that 35mm tops out at roughly 4K's worth of information (slow camera neg shot on the best spherical glass using test patterns), then if it's going to gain any ground in the 8K world it's going to need something extra, so some folks at HBO did some tests with 10K (!) scans of 35mm compared to 4K scans. They couldn't see any major differences, but when they handed the 10K to Lowry Digital to process it brought out more detail, albeit still subtly, compared to the 4K scan. http://www.etcentric.org/the-reel-th...scans-of-film/

(I'd love to see what a Lowry-processed 2K upscale vs the same scan in native 4K would look like!)

Phew, I managed to waffle out some paragraphs after all. I was sweating there for a second, I felt like Chnandler Bong once his third nipple was removed: "It's gone... the source of all my powers!"
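The Lowry process itself is proprietary, so purely as an illustrative aid, here is a very rough sketch of the general idea described above: warp neighbouring frames onto the current one with optical flow, then pool across the aligned stack so that transient junk (dirt, sparkle) gets voted out while detail that persists from frame to frame survives. OpenCV and NumPy are assumed; the flow parameters and the simple median pooling are stand-ins, not Lowry Digital's algorithm.

```python
# Rough sketch of motion-compensated, multi-frame pooling: neighbouring
# frames are aligned to the reference frame via dense optical flow, and a
# per-pixel median across the aligned stack rejects single-frame defects
# while reinforcing detail that is temporally consistent.
import cv2
import numpy as np

def warp_to_reference(ref_gray, src_gray, src):
    """Warp src onto the reference frame using dense Farneback optical flow."""
    flow = cv2.calcOpticalFlowFarneback(ref_gray, src_gray, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)

def temporally_pooled_frame(frames, center):
    """frames: list of BGR frames; center: index of the frame to enhance
    (assumed to have a neighbour on each side)."""
    ref = frames[center]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    aligned = [ref.astype(np.float32)]
    for i in (center - 1, center + 1):
        src_gray = cv2.cvtColor(frames[i], cv2.COLOR_BGR2GRAY)
        aligned.append(warp_to_reference(ref_gray, src_gray, frames[i]).astype(np.float32))
    # Median across the aligned stack: dirt that exists in only one frame is
    # voted out, persistent image detail is kept.
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)
```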
#2177
Blu-ray Emperor
Yes, but that post has got nothing to do with what Alex was actually asking at the top! It's a lot of noise and no real answers, fairly typical for me. I still don't know what methods/systems they use to upscale a 2K DI into 4K HDR, or at what stage the upscale is done (do they regrade then upscale, or upscale then regrade to make sure the upscaled grain/noise doesn't do anything funky in HDR?), etc. Most processes relating to finishing and grading are out there in various books and articles; I even found that précis about how Sony rebuild their stuff into genuine 4K, but this upscaling lark is virtually anonymous. You'll need an actual expert to chime in on this one.
#2178
Senior Member
Well, it doesn't answer the question, but it's certainly relevant. I can say one of the main benefits of AI/neural networks/machine learning is that the pattern recognition continuously gets better as more data is fed into it, which can save on development costs. Of course, another huge benefit is that one can leverage cloud-based computing platforms, which would be absolutely massive for cost savings...

I don't know what processing bandwidth studio houses have reserved for the upscaling process to get a 2K DI onto a UHD Blu-ray disc... buuuuut if I had to guess, they could probably get much more powerful rendering capability at a lower cost by renting processing from cloud-based companies like nVidia, with much more specialized hardware. I mean, some of the stuff nVidia has shown with regards to image processing is unreal.
#2179
Active Member
Jul 2018

Quote:
The thing is, a neural network has to be trained on a dataset (training sources) that generalizes to sources other than what it was trained with. So, for example, say you have a lower-resolution image with no higher-resolution version (obviously, if you had a higher-resolution version to begin with, there would be no need to upscale it). To teach the neural network, you instead train it on a dataset of similar-enough images that are higher resolution, and then use the resulting trained model to upscale the lower-resolution image. The results, for still images at least, can be astounding, though they can also vary in consistency.

That said, it is still, like Doctorossi mentioned, an interpretation: filling in the gaps in the information with additional detail, even if ideal (let's say, cherry-picked) examples can seem near perfect. For video, though, I haven't seen anywhere near the consistency that still-image upscaling with neural networks currently achieves, partly because frame-to-frame changes have to be taken into account, though I'm sure with time it will improve greatly.

Last edited by Pans; 01-27-2019 at 03:03 PM.
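A hedged sketch of the training setup described above: since no high-resolution version of the target image exists, (low-res, high-res) pairs are manufactured by downsampling similar high-resolution images, and the network learns to undo that downsample. The toy model mirrors the earlier inference sketch, and the training-frames folder is a hypothetical placeholder, not any real training corpus.

```python
# Manufacture (low-res, high-res) training pairs by downsampling high-res
# images, then train a toy super-resolution network to recover the originals.
# Paths, model size and hyperparameters are illustrative assumptions only.
import glob
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms.functional import to_tensor

class TinySR(nn.Module):
    """Same toy residual upscaler as in the earlier inference sketch."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return x + self.body(x)

model = TinySR(scale=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for path in glob.glob("training_frames/*.png"):      # hypothetical high-res corpus
        hi = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
        hi = hi[:, :, :hi.shape[2] // 2 * 2, :hi.shape[3] // 2 * 2]  # even dims so sizes match
        lo = F.interpolate(hi, scale_factor=0.5, mode="bicubic",
                           align_corners=False, antialias=True)      # manufactured low-res input
        loss = F.l1_loss(model(lo), hi)   # how far the guess is from the real high-res frame
        opt.zero_grad()
        loss.backward()
        opt.step()
```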
#2180
Blu-ray Emperor
I'm not closed off to the idea of such AI-led upsampling, but there's a much simpler way to achieve such things: actually mastering in 4K/8K to begin with!
And yes, as with Lowry's own 'dumb' process mistaking the rain in Citizen Kane for dirt and removing it, the AI would absolutely need to be taught how to properly handle the entire gamut of images and textures that can be captured on moving picatures. But as you say, teaching it to upscale by using a higher-rez source vs the lower-rez downsample sounds like the way to go.