User:Hrudai Koda/sandbox

Graphics Processing Unit Programming (CUDA)

Introduction
GPUs accelerate workloads by executing many tasks in parallel. NVIDIA GPUs use a proprietary architecture together with a proprietary programming model, CUDA, for writing code that runs in parallel. The CUDA compiler is NVCC, which sits one step above a base C compiler and adds CUDA-specific libraries and language extensions. In CUDA, work is divided into chunks, and special hardware units called streaming multiprocessors (SMs) and streaming processors (SPs) run the processes allocated to them. CUDA also introduces the concepts of device and host, which refer to the GPU and CPU respectively. Memory allocated on the host side, i.e. within system RAM, cannot be accessed directly by the device (the GPU), so memory must first be allocated in the GPU's VRAM and the data copied from the CPU's RAM into that VRAM. Doing this makes the variables initialized at the start of the program available to the GPU. Invoking a GPU kernel then runs the logic given to the GPU, which is written inside the kernel function. To learn more: CUDA tutorial
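The host/device workflow described above can be sketched as a minimal vector-addition program (names such as `addVectors` and the launch parameters are illustrative, not prescribed by CUDA):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel function: runs on the device; each thread handles one element.
__global__ void addVectors(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) allocations in system RAM.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device (GPU) allocations in VRAM; host RAM is not directly visible to the GPU.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);

    // Copy the inputs from host RAM to device VRAM.
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Invoke the kernel: work is divided into blocks of threads,
    // which the hardware schedules across the SMs.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    addVectors<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);

    // Copy the result back to the host and inspect one element.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

Compiled with `nvcc add.cu -o add`, this follows the steps in order: allocate on the device, copy host-to-device, launch the kernel, and copy the result device-to-host.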

Background
CUDA (Compute Unified Device Architecture) uses a distinct architecture with special hardware requirements, limited to NVIDIA hardware containing CUDA cores. CUDA has special support for several graphics rendering APIs such as Direct3D and OpenGL. Early GPUs were specialized only for parallelizing tasks related to image processing and rendering techniques. As computational capabilities increased, general-purpose GPUs (GPGPUs) were introduced to satisfy the needs of real-world tasks that require heavy computation.

Timeline

 * 2021
   * CUDA Toolkit 11.2.2
   * CUDA Toolkit 11.2.1
 * 2020
   * CUDA Toolkit 11.2.0
   * CUDA Toolkit 11.1.1
   * CUDA Toolkit 11.1.0
   * CUDA Toolkit 11.0
 * 2019
   * CUDA Toolkit 10.2
   * CUDA Toolkit 10.1.2
   * CUDA Toolkit 10.1.1
   * CUDA Toolkit 10.1
 * 2018
   * CUDA Toolkit 10.0
   * CUDA Toolkit 9.2
 * 2017
   * CUDA Toolkit 9.1
   * CUDA Toolkit 9.0

Achievements

 * General-purpose computation on GPUs
 * Advanced parallel computing capabilities, with performance improved several-fold over serial execution
 * Faster rendering and rasterization than serial computation
 * Dynamic memory allocation based upon the workload
 * Support for graphics rendering APIs such as Direct3D and OpenGL
 * CUDA is NVIDIA's proprietary technology and its development is maintained by NVIDIA