Sae-Woon Ryu

Dr. Sae-Woon Ryu (유세운, 柳世運), Ph.D. (Seun Ryu), SAMSUNG Electronics (E-mail: seun.ryu@samsung.com, ryuseun@gmail.com, ryuseun@mr.hanyang.ac.kr)

SOC Development Team, SYSTEM LSI Business, Device Solutions

1-1 ,Samsungjeonja-ro, Hwaseong-si, Gyeonggi-do, Republic of Korea

mobile: +82-10-8801-5738, phone: +1-509-592-9095

seun.ryu@samsung.com, ryuseun@gmail.com, https://sites.google.com/site/ryuseun/home

Research and Working Experience

SAMSUNG Electronics (March 2010 ~ present)

- Working Area: Image Processing Algorithm Evaluation, Optimization and Modeling

- Position: Senior engineer

- ISP algorithm Modeling project for consumer mobile chips:

This work comprised two major projects: the APPLE A4/A5 development project and the SAMSUNG Exynos 4/5/7 development project. In the first, Dr. Ryu was responsible for chip validation and workaround-solution development for the ISP, video codec, and scaler. In the second, he handled algorithm development and verification for Auto Focus (AF), the Image Signal Processor (ISP), the scaler, and Optical Distortion Correction (ODC). Additionally, he developed stereo convergence and dual-camera applications. Detailed descriptions follow.

Auto Focus (AF): there are two technologies, contrast detection (CD) and phase detection (PD). Contrast-detection autofocus (CDAF) measures contrast within a sensor field, through the lens: the intensity difference between adjacent pixels naturally increases as the image comes into focus, so the optical system is adjusted until maximum contrast is detected, and the focus position giving maximum contrast is taken as the full-focus position. For CDAF, Dr. Ryu developed a novel hybrid filter design for a contrast-based autofocus algorithm. Because the hybrid filter combines FIR and IIR stages, it achieves high quality with a small tap count, which enables low-cost hardware. Phase-detection autofocus (PDAF) uses phase-detection pixels on the camera sensor. Each PD pixel consists of dual pixels covered by a micro lens and a half-shield strip, so one half looks one way and the other half looks the other. However, a PD pixel captures only the phase signal and therefore behaves as a defective pixel from the sensor's imaging standpoint, which makes the distribution of PD pixels important. Dr. Ryu developed a phase-detection algorithm and designed an optimized distribution of PD detectors on the mobile image sensor.

(CDAF reference: http://graphics.stanford.edu/courses/cs178/applets/autofocusCD.html)

(PDAF reference: http://www.dpreview.com/articles/2151234617/fujifilmpd)
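The CDAF search described above can be sketched as a simple sweep-and-score loop. This is a minimal illustration, not Dr. Ryu's hybrid FIR/IIR design: the contrast metric here is a plain sum of absolute differences between adjacent pixels, and `capture` is a hypothetical stand-in for grabbing a frame at a given lens position.

```python
def focus_measure(image):
    """Sum of absolute differences between adjacent pixels: a simple
    contrast metric that peaks when the image is in focus."""
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(image[y][x + 1] - image[y][x])
            if y + 1 < h:
                total += abs(image[y + 1][x] - image[y][x])
    return total

def cdaf_search(capture, positions):
    """Sweep the lens positions, score each frame, and return the
    position with maximum contrast (the in-focus position under CDAF)."""
    scores = [(focus_measure(capture(p)), p) for p in positions]
    return max(scores)[1]

# Toy lens model (an assumption for illustration): frames have the
# largest adjacent-pixel differences near position 5.
def capture(p):
    sharpness = 10 - abs(p - 5)  # peak sharpness at lens position 5
    return [[(x * sharpness) % 7 for x in range(8)] for _ in range(8)]

best = cdaf_search(capture, range(11))
```

In a real camera the sweep is refined (e.g. coarse-to-fine hill climbing) rather than exhaustive, and the contrast filter is where designs such as the hybrid FIR/IIR filter differ.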

Image Signal Processor (ISP): an image signal processor, also called a media processor, is a specialized digital signal processor (DSP) used for image processing in digital cameras, mobile phones, and other devices. Dr. Ryu performed ISP sub-block evaluation and algorithm optimization over a long period.

Bayer transformation: the photodiodes employed in an image sensor are color-blind by nature; they can only record shades of grey. To get color into the picture, they are covered with different color filters, red, green, and blue (RGB), according to the pattern designated by the Bayer filter, named after its inventor. As each photodiode records the color information for exactly one pixel of the image, without an image processor there would be a green pixel next to each red and blue pixel. (Actually, most sensors have two green diodes for each blue and red one.) Reconstructing full color is quite complex and involves a number of different operations, and its quality depends largely on the effectiveness of the algorithms applied to the raw data coming from the sensor; the mathematically manipulated data becomes the recorded photo file. Dr. Ryu developed and improved the color-space conversion between Bayer and YUV (Adobe RGB, sRGB, etc.).

Demosaicing: as stated above, the image processor evaluates the color and brightness data of a given pixel, compares them with data from neighboring pixels, and then uses a demosaicing algorithm to produce an appropriate color and brightness value for the pixel. The image processor also assesses the whole picture to estimate the correct distribution of contrast; by adjusting the gamma value (raising or lowering the contrast range of an image's mid-tones), subtle tonal gradations, such as in human skin or the blue of the sky, become much more realistic.

Noise reduction: noise is a phenomenon found in any electronic circuitry. In digital photography its effect is often visible as random spots of obviously wrong color in an otherwise smoothly colored area. Noise increases with temperature and exposure time. When higher ISO settings are chosen, the electronic signal in the image sensor is amplified, which also raises the noise level and lowers the signal-to-noise ratio. The image processor attempts to separate the noise from the image information and remove it. This can be quite a challenge, as the image may contain areas with fine textures which, if treated as noise, lose some of their definition.

Image sharpening: as the color and brightness values for each pixel are interpolated, some softening occurs; to preserve the impression of depth, clarity, and fine detail, the image processor must sharpen edges and contours, detecting them correctly and reproducing them smoothly without over-sharpening.

Dr. Ryu evaluated the ISP sub-blocks, modeled the subsystem pipeline, simulated the demosaicing, noise-reduction, and sharpening algorithms, and tuned workaround development for low-light conditions.
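The Bayer-to-RGB step can be illustrated with a deliberately minimal demosaicer. This sketch collapses each 2x2 RGGB cell into one RGB pixel, averaging the two green samples; a production ISP instead interpolates a full-resolution value per pixel with edge-aware filters, so treat this only as a demonstration of what the mosaic contains.

```python
def demosaic_nearest(bayer):
    """Minimal demosaic of an RGGB Bayer mosaic: each 2x2 cell
    (R G / G B) becomes one RGB pixel, with the two greens averaged.
    Far simpler than a real edge-aware demosaicer."""
    h, w = len(bayer), len(bayer[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = bayer[y][x]                          # top-left: red
            g = (bayer[y][x + 1] + bayer[y + 1][x]) / 2  # two greens
            b = bayer[y + 1][x + 1]                  # bottom-right: blue
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# 4x4 mosaic of raw sensor values -> 2x2 RGB image
mosaic = [
    [10, 20, 12, 22],
    [20, 30, 22, 32],
    [14, 24, 16, 26],
    [24, 34, 26, 36],
]
out = demosaic_nearest(mosaic)
```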

Scaler: image scaling is the process of resizing a digital image. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness, and sharpness. With bitmap graphics, as the size of an image is reduced or enlarged, the pixels that form the image become increasingly visible, making the image appear "soft" if pixels are averaged, or jagged if not. With vector graphics the trade-off may be in the processing power needed for re-rendering the image, which may be noticeable as slow re-rendering of still graphics, or as a lower frame rate and frame skipping in computer animation. Apart from fitting a smaller display area, image size is most commonly decreased (subsampled or downsampled) to produce thumbnails. Enlarging an image (upsampling or interpolating) is common for making smaller imagery fit a bigger screen, for example in fullscreen mode. In "zooming" a bitmap image, it is not possible to discover any more information than already exists, so image quality inevitably suffers; however, there are several methods of increasing the number of pixels an image contains, which even out the appearance of the original pixels.

Dr. Ryu developed and improved an FIR-filter-based poly-phase image interpolator, and developed and evaluated bilinear and bicubic image-scaler C-models. His latest research result is a cascaded heterogeneous multi-scaler algorithm, used in a high-performance mobile Application Processor (AP) and published as a patent. Dr. Ryu focused on advanced scaler algorithms that achieve high-quality scaling while reducing hardware cost and power. He developed two major scalers. The first is a hybrid multistage scaler combining polyphase and bilinear stages, whose goal was to extend the scale ratio: because hardware cost depends on the scale ratio, each scaler component supports only a limited scale-down ratio. The proposed hybrid scaler pairs a high-performance scaler with a medium-performance scaler; the high-performance scaler handles the first scale-down stage and the medium-performance scaler handles the remaining scale-down. This two-step processing preserves down-scaling image quality while reducing hardware cost. The second is an adaptive-filter-based scaler, whose goal was an extended scale ratio at low power. The proposed algorithm selects among multiple scaling filters according to the frequency content: high-frequency regions require a wider filter and more computation, while low-frequency regions do not, so selective filtering reduces power.
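As a concrete reference point for the scalers discussed above, here is a plain bilinear resizer: each output pixel samples the four surrounding source pixels, weighted by fractional distance. This is only the textbook bilinear component, not the patented hybrid or adaptive-filter designs.

```python
def bilinear_resize(src, out_h, out_w):
    """Bilinear image scaler for a 2D list of intensities: each output
    pixel interpolates its four nearest source pixels."""
    in_h, in_w = len(src), len(src[0])
    dst = []
    for oy in range(out_h):
        fy = oy * (in_h - 1) / max(out_h - 1, 1)   # source row coordinate
        y0 = int(fy); y1 = min(y0 + 1, in_h - 1); wy = fy - y0
        row = []
        for ox in range(out_w):
            fx = ox * (in_w - 1) / max(out_w - 1, 1)  # source column coordinate
            x0 = int(fx); x1 = min(x0 + 1, in_w - 1); wx = fx - x0
            top = src[y0][x0] * (1 - wx) + src[y0][x1] * wx
            bot = src[y1][x0] * (1 - wx) + src[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        dst.append(row)
    return dst

src = [[0, 10], [10, 20]]
up = bilinear_resize(src, 3, 3)   # upsample 2x2 -> 3x3
```

A polyphase FIR scaler generalizes this idea: instead of a fixed two-tap weighting, it applies a multi-tap filter whose coefficients depend on the fractional phase of each output pixel, which is what makes the tap count a hardware-cost knob.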

Optical Distortion Correction (ODC): in geometric optics, distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in the image; it is a form of optical aberration. Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of the photographic lens. Dr. Ryu identified a general ODC problem: the lens distortion varies with the distance between the camera and the object. This is a serious problem for mobile cameras, and many companies have tried to fix it. Dr. Ryu developed and improved camera optical (lens-shading and geometrical) distortion-compensation algorithms; his research covered measurement of optical distortion and tuning of the correction, and the results were published as patents. He proposed a depth-information-based lens-distortion correction method using the perspective projection matrix. His technology helps capture high-quality images without geometrical distortion and is practical to implement in a mobile AP.
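Radial distortion is commonly modeled as a polynomial in the radius; a minimal sketch with a single coefficient is shown below. Inverting the forward model has no simple closed form, so a fixed-point iteration is a common practical choice. This is the generic one-parameter model, not Dr. Ryu's depth-dependent method (which, per the description above, would additionally make the correction a function of object distance).

```python
def undistort_point(xd, yd, k1, cx, cy):
    """Correct a radially distorted point under the one-parameter model
    x_d = x_u * (1 + k1 * r_u^2), solving the inverse by fixed-point
    iteration around the principal point (cx, cy)."""
    xu, yu = xd - cx, yd - cy          # initial guess: distorted coords
    for _ in range(20):                # iterate x_u <- x_d / (1 + k1 r^2)
        r2 = xu * xu + yu * yu
        xu = (xd - cx) / (1 + k1 * r2)
        yu = (yd - cy) / (1 + k1 * r2)
    return xu + cx, yu + cy

# Sanity check: distort a known point with the forward model, then recover it.
xu, yu = 0.3, 0.4                      # undistorted, relative to centre (0, 0)
k1 = 0.1
r2 = xu * xu + yu * yu
xd, yd = xu * (1 + k1 * r2), yu * (1 + k1 * r2)
rx, ry = undistort_point(xd, yd, k1, 0.0, 0.0)
```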

Stereo Convergence: the convergence is the angle formed by your eyes and the observed object. The higher the angle, the nearer the observed object is to your two eyes, and vice versa. Therefore, when the convergence is fixed, any object between you and the convergence point will appear closer to you, while objects beyond the convergence point will appear farther away. Note that if the convergence exceeds 6 degrees, meaning the object is too close, the eyes feel uneasy; conversely, when the angle is too small, meaning the object is too far away, the stereo sensation is lost. Dr. Ryu developed a stereo convergence algorithm.
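The convergence angle follows directly from the viewpoint separation and the object distance. The sketch below computes it and applies a comfort band; the 6-degree ceiling is the rule of thumb quoted above, while the 0.5-degree floor is an illustrative assumption for "too far to give stereo sensation".

```python
import math

def convergence_angle_deg(eye_separation, distance):
    """Convergence angle (degrees) at an object a given distance from
    two viewpoints separated by eye_separation (same length units):
    2 * atan((separation / 2) / distance)."""
    return math.degrees(2 * math.atan((eye_separation / 2) / distance))

def comfortable(eye_separation, distance, max_deg=6.0, min_deg=0.5):
    """True when the convergence falls inside the comfort band."""
    a = convergence_angle_deg(eye_separation, distance)
    return min_deg <= a <= max_deg

angle = convergence_angle_deg(0.065, 1.0)   # ~65 mm eye separation, 1 m object
```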

Dual Camera / Stereoscopic: a stereo camera is a type of camera with two or more lenses, each with a separate image sensor or film frame. This allows the camera to simulate human binocular vision and therefore to capture three-dimensional images, a process known as stereo photography. Stereo cameras may be used for making stereo views and 3D pictures for movies, or for range imaging. Dr. Ryu developed dual-camera applications such as a heterogeneous dual-camera digital-zoom algorithm based on wide and telephoto sensors. He also developed 3D-effect technology and a layered-panorama generation method from the dual camera. His multi-scaler-and-blender-based 3D-effect generation technology segments the image and scales each segment by a different scaling ratio; even with low computational cost, the result shows a reasonable perspective effect, enabling real-time 3D scene capture and interactive 3D-effect pictures. In another application, depth-information-based multi-layer panorama technology, the panorama scene is captured and segmented, and layers are generated using depth-based segmentation clustering; this technology enables 3D-effect panorama capture and interactive 3D-effect panoramas.
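The "differential scaling ratio per segment" idea can be illustrated with a toy per-layer scale rule: nearer segments are magnified more than farther ones, which yields a cheap perspective cue. The formula and parameter values here are illustrative assumptions, not the patented algorithm.

```python
def layer_scale_factor(depth, base_depth=1.0, strength=0.2):
    """Per-layer scale for a simple parallax-style 3D effect: segments
    nearer than base_depth are enlarged, farther ones are shrunk.
    Illustrative formula only."""
    return 1.0 + strength * (base_depth / depth - 1.0)

# Hypothetical segmented layers with per-segment depths (metres).
layers = [("person", 0.5), ("tree", 1.0), ("sky", 4.0)]
scales = {name: round(layer_scale_factor(d), 3) for name, d in layers}
```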

His latest research results have been published as patents.

Pacific Northwest National Laboratory (PNNL), Richland, WA, USA (July 2011 ~ July 2012)

- Research Area: Stochastic Image Reconstruction and Mass spectrometry Image Processing

- Position: Postdoctoral Research Associate

- Stochastic 2D/3D Reconstruction: large-scale data computation is critical to creating a statistically stable representative volume element with a large domain and high resolution. Stochastic image reconstruction from statistical information is usually so time-consuming that most research works demonstrate only a simulated micrograph of around 100x100 pixels. The challenge in image reconstruction is to increase efficiency, making it possible to realize a high-resolution, large-domain image within a short time; traditional image reconstruction cannot satisfy the demands of large-scale computation, so a new, efficient algorithm is required to make it practical. In realistic 3D modeling, the chemical image information needed for characterization and performance prediction consists of large, high-resolution data sets, owing to the inherent complexity and heterogeneity of the materials. In most studies the structure information is captured from different views, at various positions, at multiple length scales, using multimodal chemical imaging instrumentation, and in most materials the microstructure can be characterized only statistically. Simulated annealing has been widely used for various combinatorial and other optimization problems, and in stochastic image reconstruction, ad hoc and cascade simulated annealing have been widely utilized. Dr. Ryu proposed a novel adaptive multiple super-fast simulated annealing (AMSFSA) algorithm for 2D/3D microstructure reconstruction; the new algorithm is adaptive and multithreaded, with high converging efficiency toward optimal results.
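The annealing loop at the heart of stochastic reconstruction can be sketched on a toy problem. This sketch matches only a single statistic (the volume fraction of solid pixels) with plain simulated annealing; real reconstructions, including the AMSFSA work described above, match richer descriptors such as two-point correlation functions and use far more sophisticated scheduling.

```python
import math
import random

def volume_fraction(img):
    """Fraction of solid (value 1) pixels: the target statistic here."""
    flat = [v for row in img for v in row]
    return sum(flat) / len(flat)

def anneal_reconstruction(size, target_vf, steps=3000, t0=0.05, cool=0.995, seed=0):
    """Toy stochastic reconstruction by simulated annealing: propose
    single-pixel flips of a binary image, always accept improvements,
    and accept worsening moves with probability exp(-dE / T) under a
    geometric cooling schedule."""
    rng = random.Random(seed)
    img = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
    cost = abs(volume_fraction(img) - target_vf)
    t = t0
    for _ in range(steps):
        y, x = rng.randrange(size), rng.randrange(size)
        img[y][x] ^= 1                          # propose a pixel flip
        new_cost = abs(volume_fraction(img) - target_vf)
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / t):
            cost = new_cost                     # accept the move
        else:
            img[y][x] ^= 1                      # reject: undo the flip
        t = max(t * cool, 1e-12)                # geometric cooling
    return img, cost

recon, final_cost = anneal_reconstruction(16, 0.3)
```

The efficiency problem the AMSFSA work targets is visible even here: each proposal re-evaluates the statistic over the whole image, which is exactly the kind of cost that must be updated incrementally and parallelized for large domains.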

- Mass Spectrometry Image Processing: the development of high-resolution mass spectrometry (MS) imaging faces challenges in processing large data sets. Traditional data-analysis algorithms and most commercial MS software packages cannot meet the urgent need to analyze large data sets from high-resolution mass spectrometry: with commercial software, the data file size is limited to around 100 MB, while the whole data set of one high-resolution MS scan easily reaches 40 GB, and analysis speed is also intolerable. Beyond file size, there are other limitations, such as the maximum intensity, signal-to-noise contrast, number of scans per line, and m/z step size. To keep up with the fast pace of instrumentation improvement, image-data-processing algorithms must be innovated and implemented. A two-pronged approach is being developed in our software suite: one part uses large shared memory, while the other uses new algorithms to query and load only the required data into memory. MSI QuickView has been developed to provide an efficient method to process, visualize, query, and analyze spatial mass spectrometry data with an easy-to-use graphical user interface. The image-processing workflow is being streamlined and automated using the Collaborative Analytical Toolbox (CAT) and the MeDICI middleware for data-intensive computing. Visualization software, including ParaView and Visko, is being incorporated into the workflow to allow interactive high-resolution data queries.

Korea Institute of Science and Technology (KIST), Seoul, Korea (Jun. 2002 ~ Feb. 2005)

- Research Area: Computer Vision and Graphics

- Position: Research Assistant

- 3D Cyber Museum Technology: in this project, Dr. Ryu worked on three key technologies to realize a 3D cyber museum system: Image-Based Modeling and Rendering (IBMR), Level of Detail (LOD), and Geographic Information Systems (GIS). Detailed descriptions follow.

First, Image-Based Modeling and Rendering (IBMR): in computer graphics and computer vision, image-based modeling and rendering (IBMR) methods rely on a set of two-dimensional images of a scene to generate a three-dimensional model and then render novel views of that scene. The traditional approach of computer graphics has been to create a geometric model in 3D and try to reproject it onto a two-dimensional image. Computer vision, conversely, is mostly focused on detecting, grouping, and extracting features (edges, faces, etc.) present in a given picture and then trying to interpret them as three-dimensional clues. Image-based modeling and rendering uses multiple two-dimensional images to generate novel two-dimensional images directly, skipping the manual modeling stage.

Second, Level of Detail (LOD): In computer graphics, accounting for level of detail involves decreasing the complexity of a 3D object representation as it moves away from the viewer or according to other metrics such as object importance, viewpoint-relative speed or position. Level of detail techniques increase the efficiency of rendering by decreasing the workload on graphics pipeline stages, usually vertex transformations. The reduced visual quality of the model is often unnoticed because of the small effect on object appearance when distant or moving fast.
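The distance-driven LOD selection described above reduces to picking a discrete mesh level from the viewer distance. The threshold values below are illustrative assumptions; real systems also factor in object importance, screen-space size, or viewpoint-relative speed, as noted.

```python
def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Pick a discrete level of detail from viewer distance: LOD 0 is
    the full-resolution model, higher levels are progressively coarser.
    Threshold values are illustrative."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)        # beyond all thresholds: coarsest model

lods = [select_lod(d) for d in (5, 20, 50, 200)]
```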

Third, Geographic information system(GIS): A geographic information system (GIS) is a system designed to capture, store, manipulate, analyze, manage, and present all types of spatial or geographical data. The acronym GIS is sometimes used for geographical information science or geospatial information studies to refer to the academic discipline or career of working with geographic information systems and is a large domain within the broader academic discipline of Geoinformatics. What goes beyond a GIS is a spatial data infrastructure, a concept that has no such restrictive boundaries. In a general sense, the term describes any information system that integrates, stores, edits, analyzes, shares, and displays geographic information. GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data in maps, and present the results of all these operations. Geographic information science is the science underlying geographic concepts, applications, and systems.

Hanyang University (Ph.D. course) (Mar. 2005 ~ Aug. 2010)

- Research Area: Computer Vision, Graphics and Parallel Computing

- 3D Modeling/ Image-Based Lighting (IBL) /AR Technology :

In this project, Dr. Ryu researched augmented reality (AR), relighting, and multi-view synthesis for many useful applications. His applications have been applied to medical surgery systems, telecommunication systems, and more, so his research results will be helpful for the healthcare and telecommunication industries, for the following reasons.

First, in Augmented Reality (AR) technology, he realized DirectAR and 3D Occlusion AR. DirectAR projects a graphical image directly onto the patient's body. The DirectAR technology is combined with a specialized medical system called a biplane fluoroscopy (biplane X-ray) system; Dr. Ryu researched 3D calibration of biplane fluoroscopy and built a software tool that computes the patient's 3D target information. The AR system projects the graphical image taking the patient's body shape into account, so the doctor receives useful surgical guidance from the DirectAR system. 3D Occlusion AR shows a plausible occlusion effect: by considering the depth relationship between each foreground and the background, the virtual graphical image can be occluded by foreground objects, so the user perceives the virtual object as realistically embedded in the scene.
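The occlusion test at the core of 3D Occlusion AR is a per-pixel depth comparison: draw the virtual object only where it is closer to the camera than the real scene. A minimal compositing sketch, assuming per-pixel depth is available for both the real scene and the rendered virtual object:

```python
def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion for 3D occlusion AR: the virtual pixel wins
    only where it is nearer than the real scene, so real foreground
    objects correctly hide the virtual object."""
    h, w = len(real_rgb), len(real_rgb[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if virt_rgb[y][x] is not None and virt_depth[y][x] < real_depth[y][x]:
                row.append(virt_rgb[y][x])   # virtual object is in front
            else:
                row.append(real_rgb[y][x])   # real scene occludes it
        out.append(row)
    return out

# 1x2 toy frame: background at depth 5, a real foreground object at depth 1,
# and a virtual object at depth 3 covering both pixels.
real = [["bg", "fg"]]
rdep = [[5.0, 1.0]]
virt = [["vobj", "vobj"]]
vdep = [[3.0, 3.0]]
out = composite_with_occlusion(real, rdep, virt, vdep)
```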

Second, in Image-Based Lighting (IBL) technology, Dr. Ryu proposed a real-time geometry-modeling method to shade images lit by a field-capturing system, using minimal lighting to relight the human face. This method is based on the Phong reflection model and a 3D surface normal vector estimated in real time using inverse ray tracing. Dr. Ryu also developed a real-time iridescence-texture-based relighting method, derived from a simplified anisotropic iridescence bidirectional reflectance distribution function (BRDF) based on spatial spectrum distribution filtering (SSDF). The anisotropic iridescence BRDF takes into consideration that the object's material properties are not homogeneous; in his dissertation, the iridescence BRDF is identified as a 7D spatial-spectrum-varying BRDF (7D SSVBRDF), and SSDF is a unique algorithm based on the spectrum distribution of reflected beam patterns. The two proposed methods enable more realistic and efficient relighting of various natural objects, and valid experimental relighting results, using general human faces and biological iridescence objects, were verified in this study. For real-time rendering, a multi-material texturing pipeline was designed on the programmable GPU framework: the diffuse, geometric surface-normal, and SSDF-based iridescence textures are rendered on this pipeline, enabling efficient image processing without bottlenecks. The human-face relighting results show real-time 3D shape modeling and relighting with the environment, and the iridescence relighting results show a real-time-varying spectrum distribution affected by the dynamic environment. In conclusion, the proposed real-time relighting method can be widely used in commercial, realistic image-content production systems. Dr. Ryu realized a real-time IBL system that uses minimal equipment; because this technology gives the user an immersive, tangible effect, it can be used in next-generation telecommunication.
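The Phong reflection model underlying the face-relighting method evaluates, at each surface point, a diffuse term from the normal-light angle plus a specular lobe around the mirror direction. A per-point sketch (the standard model only, not the full relighting pipeline; coefficients are illustrative):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong_intensity(normal, light_dir, view_dir, kd=0.7, ks=0.3, shininess=16):
    """Phong reflection at one surface point:
    I = kd * max(0, N.L) + ks * max(0, R.V)^shininess,
    with R the reflection of L about N."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    r = tuple(2 * ndotl * nc - lc for nc, lc in zip(n, l))  # R = 2(N.L)N - L
    rdotv = max(0.0, sum(a * b for a, b in zip(r, v)))
    return kd * ndotl + ks * (rdotv ** shininess)

head_on = phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1))  # light along the normal
grazing = phong_intensity((0, 0, 1), (1, 0, 0), (0, 0, 1))  # light in the surface plane
```

In the relighting system described above, the per-pixel normal comes from the real-time surface estimation, and the environment supplies the light directions, so this evaluation runs per pixel per light on the GPU.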

Third, multi-view synthesis is used for glasses-free 3D displays, which require multiple images taken from different viewpoints to show a scene. Generating such a large number of viewpoint images efficiently is therefore emerging as a key technique in 3D video technology. Image-based view synthesis is a technique for generating the required virtual-viewpoint images from a limited number of views and depth maps. Dr. Ryu proposed an algorithm to compute virtual views much faster by using the multi-texture image structure of the Graphics Processing Unit (GPU), and demonstrated its effectiveness for fast view synthesis through a variety of experiments with real data. Specifically, he used CUDA (by NVIDIA) to control the GPU: to increase processing speed, all view-synthesis steps were adapted to the single-instruction-multiple-data (SIMD) structure that is a main feature of CUDA, use of the high-speed memories on the GPU device was maximized, and the implementation was optimized. As a result, nine intermediate view images of 720x480 pixels were synthesized within 0.128 seconds.
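The depth-map-based view synthesis described above forward-maps each source pixel to the virtual viewpoint by a disparity derived from its depth. A 1D-scanline sketch with a z-buffer for occlusions (the GPU version parallelizes exactly this per-pixel mapping; `baseline_disp` is an illustrative camera-geometry constant):

```python
def synthesize_view(src, depth, alpha, baseline_disp=4.0):
    """Forward-map a 1D scanline to a virtual viewpoint: each pixel
    shifts by a disparity inversely proportional to its depth, scaled
    by alpha (0 = source view, 1 = the other reference view).  Nearer
    pixels overwrite farther ones; unfilled positions stay None,
    showing the holes that forward mapping leaves."""
    w = len(src)
    out = [None] * w
    zbuf = [float("inf")] * w
    for x in range(w):
        d = baseline_disp / depth[x]            # disparity from depth
        nx = int(round(x + alpha * d))          # target column in the new view
        if 0 <= nx < w and depth[x] < zbuf[nx]:
            out[nx] = src[x]
            zbuf[nx] = depth[x]
    return out

scan = ["a", "b", "c", "d"]
depth = [4.0, 1.0, 4.0, 4.0]                    # "b" is the near pixel
virt = synthesize_view(scan, depth, alpha=0.5)  # halfway viewpoint
```

The `None` hole in the result is the classic forward-mapping artifact that real systems fill by blending a second reference view, which is why the duplex/dual-texture formulations in the papers below use multiple reference images.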

▶ Publications and Achievements

- Conference Paper (19)

[c1] Sae-Woon Ryu, Yong-Moo Kwon, Jong-Il Park, “Cyber Museum Technology by Using 3D Geographic Information,” Proceedings of IEEK summer conference, pp. 1519-1522, 2003. (Domestic)

[c2] Sae-Woon Ryu, Yong-Moo Kwon, Jong-Il Park, “Real-Time Interactive Image 3D Effect Technology,” Proceedings of IEEK summer conference, pp. 1523-1526, 2003. (Domestic)

[c3] Yong-Moo Kwon, Jie-Eun Hwang, Tae-Sung Lee, Min-Jeong Lee, Jai-Kyung Suhl, and Sae-Woon Ryu, "Toward the synchronized experiences between real and virtual museum," APAN 2003 Conference, Vol. 5, 2003. (International)

[c4] Sae-Woon Ryu, Yong-Moo Kwon, Jong-Il Park, "Layered Panorama using Depth Information," Proceedings of HCI (Human Computer Interaction) Korea, Volume 1, pp.660-666, 2004. (Domestic)

[c5] Sae-Woon Ryu, Chang-Yung Bang, Jong-Il Park, Sang Hwa Lee, Sang-Yup Lee, Sang Chul Ahn, "A Study of Normal Map Extraction and Lighting Technology for Real-time Image Based Lighting," Proceedings of HCI (Human Computer Interaction) Korea, Volume 1, pp.1031-1036, February, 2007. (Domestic)

[c6] Sae-Woon Ryu, Jong-Il Park, Sang Hwa Lee, Sang-Yup Lee, Sang Chul Ahn, "Real-time Relighting Technology for Tangible Teleconference System," Proceedings of Korean Society of Broadcast Engineers, November, 2007. (Domestic)

[c7] Sae-Woon Ryu, Sang Hwa Lee, Jong-Il Park, "Real-time Video Based Relighting Technology for Moving Object," Proceedings of HCI (Human Computer Interaction) Korea, Volume 1, pp.433-438, February, 2008. (Domestic)

[c8] Sae-Woon Ryu, Jong-Il Park, "Multi-Normal Texture for Iridescences Rendering," Proceedings of Image Processing and Image Understanding, February, 2009. (Domestic)

[c9] Sae-Woon Ryu, Sang Hwa Lee, and Jong-Il Park, "Video Architecture and Real-Time Lighting Technology for Tangible Teleconference," Proceedings of IEEE International Symposium on Consumer Electronics (ISCE'08), April 14-16, 2008. (International)

[c10] Sae-Woon Ryu, Sang Hwa Lee, Sang-Chul Ahn, and Jong-Il Park, "Real-Time Video Based Relighting Technology for Moving Object," Proceedings of Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV'08), pp.480-485, January 23-26, 2008. (International)

[c11] Sae-Woon Ryu, Sang Hwa Lee, and Jong-Il Park, "Real-Time Image-Based Relighting for Tangible Teleconference," Proceedings of Joint Workshop between HYU and BUPT, October 10-12, 2008. (International)

[c12] Ji-Youn Choi, Sae-Woon Ryu, and Jong-Il Park, “Fast Multi-View Synthesis with Duplex Texture,” Proceedings of International Conference on 3D Systems and Applications (3DSA’10), pp. 93-96, 2010. (International)

[c13] Ji-Youn Choi, Sae-Woon Ryu, Hong-Chang Shin, and Jong-Il Park, “Real-Time View Synthesis System with Multi-Texture Structure of GPU,” Proceedings of IEEE International Conference on Consumer Electronics (ICCE’10), pp. 171-172, Jan. 9-13, 2010. (International)

[c14] Ji-Youn Choi, Sae-Woon Ryu, Jong-Il Park, “Multi-view Fast Video Synthesis by using Dual-texture mapping of Multiple Reference Image,” Proceedings of Image Processing and Image Understanding, 2010. (Domestic)

[c15] Ji-Youn Choi, Sae-Woon Ryu, Hong-Chang Shin, Jong-Il Park, “Fast Multi-View Synthesis Using Duplex Forward Mapping and Parallel Processing,” Journal of the Korean Institute of Communications and Information Sciences, Vol.34, No.11, pp. 1311-1318, 2009. (Domestic)

[c16] Ji-Youn Choi, Sae-Woon Ryu, Hong-Chang Shin, Jong-Il Park, “Sequentially through Parallel Processing of Forward Mapping for Multi-view Fast Video Synthesis,” Proceedings of KSPC, Vol.22, No.1, pp. 291-292, 2009. (Domestic)

[c17] Sae-Woon Ryu, Jae-Hyek Han, Jaeha Jeong, Sang Hwa Lee and Jong-Il Park, "Real-Time Occlusion Culling for Augmented Reality," Proceedings of Joint Workshop on Frontiers of Computer Vision (FCV'10), pp. 498-503, February 4-6, 2010. (International)

[c18] Kleese van Dam K, JP Carson, AL Corrigan, DR Einstein, ZC Guillen, BS Heath, AP Kuprat, IT Lanekoff, CS Lansing, J Laskin, D Li, Y Liu, MJ Marshall, EA Miller, G Orr, P Pinheiro da Silva, S Ryu, CJ Szymanski, and M Thomas. "Velo and REXAN - Integrated Data Management and High Speed Analysis for Experimental Facilities." In Proceedings of the IEEE 8th International Conference on EScience, October 8-12, 2012. (International)

[c19] Seun Ryu, Dongsheng Li, "Optimizing Stochastic Process for Efficient Microstructure Reconstruction," Minerals, Metals and Materials Society/AIME, 2012. (International)

- Journal Paper (6)

[j1] Sae-Woon Ryu, Sang Hwa Lee, Sang Chul Ahn, Jong-Il Park, "Tangible video teleconference system using real-time image-based relighting," IEEE Transactions on Consumer Electronics, Volume 55, Issue 3, pp.1162-1168, August 2009. (International SCI)

[j2] Sae-Woon Ryu, Sang Hwa Lee, and Jong-Il Park, "Real-Time 3D Surface Modeling for Image Based Relighting," IEEE Transactions on Consumer Electronics, Volume 55, Issue 4, pp. 2431-2445, November 2009. (International SCI)

[j3] Sae-Woon Ryu, Jong-Il Park, "Real-Time Image-Based Relighting for Tangible Video Teleconference," Journal of Korean Society of Broadcast Engineers, Volume 14, Issue 6, pp.807-810, November, 2009. (Domestic)

[j4] Sae-Woon Ryu, Sang Hwa Lee, Jong-Il Park, "Optical Multi-Normal Vector Based Iridescence BRDF Compression Method," Journal of Korean Institute of Information Scientists and Engineers, Volume 37, Issue 3, pp. 184-193, June, 2010. (Domestic)

[j5] Wei Xu, Xin Sun, Dongsheng Li, Seun Ryu, Mohammad A. Khaleel, "Mechanism-based Representative Volume Elements (RVEs) for Predicting Property Degradations in Multiphase Materials," Computational Materials Science 68:152-159, 2013. (International)

[j6] Seun Ryu, Guang Lin, Xin Sun, Mohammad Khaleel, Dongsheng Li, "Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction," International Journal of Theoretical and Applied Multiscale Mechanics 2.4 (2013): 287-297. (International)

- Patent (8)

[p1] Method of rendering a 3D image from a 2D image (South Korea) application number (AN): 1020030042665 (2003.06.27) registration number (GN): 1005333280000 (2005.11.28)

[p2] Augmented Reality Projection System of Affected Parts And Method Therefor (South Korea) application number (AN): 1020050123069 (2005.12.14) registration number (GN): 1007260280000 (2007.05.31)

[p3] Depth based lens distortion correction method (South Korea) application number (AN): 1020130110476 (2013.09.13) registration number (GN): 112013084078522 (2013.09.13)

[p4] Depth Information based Optical Distortion Correction Circuit and Method. (US) Application Number: US14/470358

[p5] Imaging Scaler for Having Adaptive Filter (South Korea) application number (AN): 10-2014-0177728 (2014.12.10)

[p6] Image scaler having adaptive filter (US) application number (AN): US14/962271

[p7] Multi-Scaler and blender based 3D effect generation technology (South Korea) application number (AN): 10-2015-0119322 (2015.08.25)

[p8] Depth information based multi-layer panorama technology (South Korea) application number (AN): 10-2015-0120918 (2015.08.27)

- SW Program Registrations (3) in Korea Copyright Commission

[s1] Web Based Parameterized 3D Effect Program/ Registration Num: 2003-01-26-2769

[s2] Hough Transform based Animation Generation Technology in TIP(Tour Into the Picture) / Registration Num: 2004-01-199-004901

[s3] OpenGL Based Interactive TIP(Tour Into the Picture) Contents Authoring and Web Contents Auto-Generation / Registration Num: 2004-01-199-004902

- Patent Technology Trade

[t1] Title of technology: 3D Image Generation Method from 2D Image, Contract Date: 2003.10.6 / Company: DongBangSnC / Price: 52,000,000(Won)

Another homepage: https://sites.google.com/site/ryuseun/home/profile