Researchers develop AI that recreates real-world lighting and reflections from a single photo


Researchers have developed an experimental deep-learning method that can recreate the lighting, reflections, and material appearance of real-world objects from a single photograph.

Ever bought carpets or fabrics on the Internet and wished you could tell what they'd look like in real life? Thanks to researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and France's Inria Sophia Antipolis, you're one step closer to experiencing just that.

At the 2018 SIGGRAPH conference held in Vancouver today, the team presented "Single-Image SVBRDF Capture with a Rendering-Aware Deep Network," a method that extracts the texture, highlights, and shading of materials in photos and digitally reconstructs the environmental lighting and reflections.

"Texture, highlights, and shading are among the visual cues that allow humans to perceive material appearance in a single picture," the researchers wrote. "Yet recovering spatially-varying bi-directional reflectance distribution functions (the four-variable functions that define how light is reflected off opaque surfaces) from a single image based on such cues has challenged computer graphics researchers for decades. We tackle this problem by training a deep neural network to automatically extract and make sense of these visual cues."
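To make the idea of these per-pixel reflectance functions concrete, here is a toy shading sketch. It is not the paper's reflectance model; it uses a simplified Blinn-Phong stand-in (an assumption for illustration) to show how four per-pixel quantities, together with light and view directions, determine the reflected color:

```python
import numpy as np

def render_pixel(normal, diffuse, specular, roughness, light_dir, view_dir):
    """Toy per-pixel shading from four SVBRDF-style quantities.

    normal, light_dir, view_dir: unit 3-vectors; diffuse, specular: RGB
    albedos; roughness: scalar glossiness control. A simplified Blinn-Phong
    stand-in, not the paper's actual reflectance model.
    """
    n_dot_l = max(np.dot(normal, light_dir), 0.0)
    half = light_dir + view_dir
    half /= np.linalg.norm(half)
    # Smoother (lower-roughness) surfaces get a narrower, sharper highlight
    shininess = 2.0 / max(roughness ** 2, 1e-4)
    spec_term = max(np.dot(normal, half), 0.0) ** shininess
    return diffuse * n_dot_l + specular * spec_term

# Light and camera both facing the surface head-on
color = render_pixel(
    normal=np.array([0.0, 0.0, 1.0]),
    diffuse=np.array([0.8, 0.2, 0.2]),
    specular=np.array([0.04, 0.04, 0.04]),
    roughness=0.3,
    light_dir=np.array([0.0, 0.0, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
)
```

Inverting this mapping, i.e. recovering the four quantities at every pixel from one observed image, is the ill-posed problem the network is trained to solve.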

The researchers started with samples, and lots of them. They acquired a dataset of more than 800 artist-created materials and ultimately selected 155 high-quality sets across nine categories (paint, plastic, leather, metal, wood, fabric, stone, ceramic tile, and ground). After setting aside about a dozen as a test set, they rendered the rest in a virtual scene simulating a camera's field of view (50 degrees) and flash.

Training a machine learning model on those alone, however, wouldn't have provided enough diversity. So to enlarge the material dataset, the researchers used a cluster of 40 CPUs to mix the materials and randomize their parameters, ultimately producing 200,000 realistic renderings of a wide variety of materials.
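The mix-and-randomize augmentation described above can be sketched as follows. The blending weight and the ±10% jitter range are illustrative assumptions, not values from the paper:

```python
import random

def augment(material_a, material_b, rng):
    """Hypothetical sketch of data augmentation: blend two base materials
    with a random weight, then jitter each scalar parameter slightly."""
    t = rng.random()
    mixed = {k: t * material_a[k] + (1 - t) * material_b[k] for k in material_a}
    # Perturb each parameter by up to +/-10% to add further variety
    return {k: v * rng.uniform(0.9, 1.1) for k, v in mixed.items()}

rng = random.Random(0)
a = {"roughness": 0.2, "specular": 0.5}
b = {"roughness": 0.8, "specular": 0.1}
new_material = augment(a, b, rng)
```

Running this kind of recombination over 155 base materials is how a modest artist-made collection can be expanded into hundreds of thousands of distinct training examples.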

The next step was model training. The team designed a convolutional neural network (a machine learning algorithm that loosely mimics the arrangement of neurons in the visual cortex) to predict four maps: the per-pixel normals (the surface orientation at each pixel of the rendered image), diffuse albedo (the matte light reflected by the surface), specular albedo (the mirror-like reflection of light), and specular roughness (the glossiness of reflections).
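Shape-wise, the network's job is to turn an RGB image into those four maps stacked as output channels. The minimal sketch below uses a single random 1x1 "convolution" purely to show the input/output structure; the real network is a deep learned architecture, and the layer here is an assumption for illustration:

```python
import numpy as np

def predict_maps(image, weights):
    """Toy stand-in for the convolutional network: a 1x1 'convolution'
    maps each RGB pixel to 10 channels, split into four SVBRDF maps.
    Weights are random here; a real network learns them from data."""
    h, w, _ = image.shape
    features = (image.reshape(-1, 3) @ weights).reshape(h, w, 10)
    return {
        "normal": features[..., 0:3],           # per-pixel surface orientation
        "diffuse_albedo": features[..., 3:6],   # matte base color
        "specular_albedo": features[..., 6:9],  # mirror-like reflectance
        "roughness": features[..., 9:10],       # glossiness of highlights
    }

rng = np.random.default_rng(0)
maps = predict_maps(rng.random((256, 256, 3)), rng.standard_normal((3, 10)))
```

The 256 x 256 resolution matches the input size the article mentions later.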

To minimize variability among map values, they developed a similarity metric that compares renderings of the predicted maps against renderings of the ground truth. And to ensure consistency in the output images, they introduced a second machine learning model that combines the global illumination extracted from each pixel (i.e., the light reflected off surfaces) with local information, allowing, as the researchers put it, information to be exchanged between distant image regions.
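The core of a rendering-aware comparison can be sketched like this: instead of penalizing differences in the maps directly, render both the predicted and ground-truth maps under the same lighting and compare the resulting images. The diffuse-only `toy_render` and the specific lighting values are assumptions for illustration:

```python
import numpy as np

def rendering_loss(pred_maps, true_maps, render, lights):
    """Sketch of a rendering-aware similarity metric: average the mean
    squared difference between images rendered from predicted and
    ground-truth maps under several shared lighting conditions."""
    loss = 0.0
    for light in lights:
        loss += np.mean((render(pred_maps, light) - render(true_maps, light)) ** 2)
    return loss / len(lights)

def toy_render(maps, light):
    # Minimal stand-in renderer: diffuse shading scaled by light intensity
    return maps["diffuse"] * light

pred = {"diffuse": np.full((4, 4), 0.5)}
true = {"diffuse": np.full((4, 4), 0.7)}
loss = rendering_loss(pred, true, toy_render, lights=[1.0, 0.5])
```

Comparing renderings rather than raw maps means the loss only penalizes differences that actually change how the material looks, which is what matters for the final result.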

They trained the network for 400,000 iterations and tested it on 350 photos taken with an iPhone SE and a Nexus 5X, cropped to approximate the field of view of the training data. The result? The model performed quite well, successfully reproducing the realistic reflection of light off metal, plastic, wood, paint, and other materials.

Unfortunately, it isn't without limitations. Hardware constraints restrict it to images of 256 x 256 pixels, and it struggles to reproduce the lighting and reflections of photos with low dynamic range. Nevertheless, the team notes that it generalizes well to real photographs and, barring the unexpected, shows that a single network can be trained to handle a wide variety of materials.
