User talk:QuantumFrameworks

LOL (Lesson of Life) Part 1: Schooling

Today is 5th March, 2013, and I am writing down my inner thoughts to share some of the memories that have held me together to this date. I don't merely assume they will hold much longer; I know they will, and I am sure of it. Not because I am immortal (I am not), but because YES, my reminiscences are, and always will be, an undying essence of my LIFE. Today we are living in the 21st century, with sophisticated technologies all around us. Whatever we desire, we strive to get. It is said that the human is the only organism to have subjugated this planet, and the only reason we now have new technologies that have enriched our lives in so many ways. Many individuals, poets, writers, lyricists and authors, to name a few, have defined the meaning of LIFE in different ways. LIFE... this word has a certain meaning, a firm code which we all have to follow. It is our daily protocol, one with a higher calling. It is our privilege to make good in this LIFE.

LIFE: Living In Free Environment

Even artists see from different angles. It may be the painter (who understands the meaning of nature and paints it on canvas), the writer (who describes the ethics of a personality on parchment), the sculptor (whose hands do not just shape the mould, but feel the volume of a real, living anatomy within it), or the novelist (whose work is not merely filling pages, but presenting the whole picture before the reader). Every individual living on this globe has a purpose, a purpose for which a deadline is set. He or she has to complete that purpose within that deadline. And every person striving to complete their purpose within that limit of time is provided with a certain kind of tools, which we refer to as "education".

It was 1997, and I remember that day to this date

A similar kind of incident happened to me when I was in school. I still remember that it was spring, and my school timings were from 8.20 AM in the morning to 2.20 PM in the afternoon, with one small break of 15 minutes. We used to have 10 periods, meaning 10 lectures on different subjects in a day, each of approximately 45 minutes. I was good at drawing, and so I regularly drew on charts for the school, and many of them got selected for events and regular school functions. Prof. Rao was our drawing sir, and no, he didn't teach me how to draw; it is plain and simple for me to say that I was born with such a gift. Others must have thought that I was attending some extra drawing classes after school hours; well, it was natural for them to think that way. I still remember that in my entire school life (STD 3 to STD 10) I never got any suggestion, advice or lecture from Prof. Rao about how my drawing was or how I could better it. On the plus side, I knew that if I ever had an issue I could ask Prof. Rao, and it was absolute that his answer would benefit me. But I was not the only one drawing in the whole school. There were better, higher-quality artists, much more mature in their work. I would always admire their skills and loved to talk with them. Many things remained unanswered, or only a few were answered. I never drew like I was in some sort of competition. I drew because I love to draw. I could sit day and night, 24/7 x 365, and keep drawing.

My practice became my passion, and I practiced my passion. I never sat for the elementary drawing exam or any external competition or exam; I was simply unaware of such things. The purity of an artist's emotional content should come not from passion or from practice; it should come from the artist's own soul, heart and mind. When these three, soul + heart + mind, come together, a memory is created which you put forth, and it becomes history as you finish the work. All the drawings that I made ages ago (partly now lost, some still with me) are nothing but my memories, which are now history, and some of them speak of me: "Oh father, you are my creator and destroyer. You have given me life and you have protected me from all sorts of things." And I say, "It is not me who created you, and it will not be me who destroys you; you were summoned by a higher calling, and I was only the medium for it." All my drawings are captivated by these fantasies all around us. It is said that "practice makes perfect", but that does not mean a person who is perfect is always focused; and a person who is focused is not necessarily perfect. I always hated History as a subject, and now I know that it is only through history that we can derive our memories. This was the first lesson I learned in my LIFE.

I don't remember the exact time and date; a lecture was going on when I was summoned to the teachers' office instantly. That was when I was in STD 7, division A. As soon as I went into the teachers' office, I saw many teachers seated at their regular places, among them Prof. Rao. Rao sir asked me to please call Shashank Vedpathak from division B, and I took off that instant. We then both presented ourselves in front of Rao sir. He told us that there was a National Level Drawing Competition coming up the next week, and that he had selected me (Vikas) as first preference, but they required a co-artist to complete the team. So the two of us would be representing our school. In those days, even getting such a chance to represent your school, college or university was a matter of great pride. It felt like the floor underneath me was quaking, as I had never attended even a small exam or competition, and this was a National Level Certification. I was half scared and half excited. I was relieved because on my side was one of my friends, Shashank Vedpathak. Shashank didn't draw the way the rest of us did. Rao sir chose me not because I drew a lot, but because he must have seen something in me.

I still recall a small discussion between Rao sir and my parents. I used to draw even at home, whether personal work or school projects. We had 4 houses: Nehru (Red House), Gandhi (Green House), Tilak (Yellow House) and Tagore (Blue House). I still don't remember how I ended up in Red House; well, that has nothing to do with my drawing projects in school. It was open day, and my result was with me when my 1st Person and I left the school. There I saw Rao sir entering our school premises, and I told my 1st Person that he was our drawing sir. My 1st Person met him and they exchanged some words which I was unaware of. He asked my 1st Person...

Rao Sir: Does anybody apart from him draw at home?
1st Person: No, it's just him.
Rao Sir: Does he go to any school where he is taught to draw?
1st Person: No, he has been drawing since the age of 3.
Rao Sir: He is really good at drawing, and I think he should get nearly 1 hour every day to practice his drawing.
1st Person: Even if he gets the full day he will, keeping aside all his studies.

Since that day I officially had permission to pursue my drawing. Many of my drawings on large-scale charts were selected for many functions, and I think that in the history of the school I was the one who had drawn the greatest number of charts by the time I left.

Anyways...

In the teachers' office, Rao sir told us what to bring, how to prepare ourselves for the competition, and what time we both should reach the venue. So that day finally arrived. My dad took me in a cab (that was a time when our finances were spitting on us). He asked me where the venue was, and instead of telling him it was PDP (Priyadarshani Park), I told him it was PD ground. Even the cab driver didn't know where that was. The reporting time was exactly 7.30 AM, and from 7.00 AM my dad, the cab driver and I were just haplessly driving from place to place; at one point I did see a large crowd of students standing, but I thought they must be doing some RSP thing. Finally, some time between 9.30 and 10.00 AM, my dad dropped me near the back gate of my school and took off towards his office. I was thinking about what I should tell my 1st Person, and what my school teachers would think of me: a failure, or scared, or a run-away artist. Even the name "artist" should not be measured against me. I was looking at the back gate of my school as I passed by. One part of me said to go into school, and the other said to go home. I was in a state where one part had to be right, and yet the other was not wrong. I finally went home, compromising with one part of myself. That decision, I thought, would keep me safe at home, free from any comments that my friends or teachers or principal might pass on me. I wanted to be home as fast as possible, before anyone knew that I had failed. I was in such a situation that I couldn't cry, couldn't run away, could do nothing, just nothing. It was like being a small pebble that children kick around as if playing football, and the pebble can do nothing. The time of that situation was not in my hands. You cannot control TIME: that is the second lesson I learned in LIFE. As I went home I explained everything to my 1st Person, and thereafter you know what happened. 2 to 3 hours wasted, money wasted, time wasted, school study wasted.

The next day, when I went to school, Rao sir called me and asked what had happened, and I explained everything. My friends and school teachers asked me, and even Shashank asked me what happened. The reply was the same; my answer didn't change. But their questions changed ME. After meeting Rao sir, Shashank and I met outside the office. I had asked him (before the competition) whether we could go together; his reply had been that he would come with his parents directly to the venue. I just needed some HELP, and unfortunately I didn't get it. Shashank could have told me to tag along with him, but that was his concern. And no, it was not his fault either; if he didn't want me to tag along, it was his decision, and a fair one. What happened after that, I don't know; yet I kept contributing my drawings to my school. Neither trust nor depend on your friends is the third lesson in LIFE I learned. And always listen to your parents is the fourth lesson of LIFE I got.

Shashank got that credential and won second place. The saddest part was that after I knew something, I was nothing. That chapter ended when I left school, but the memory has lasted, and it will be with me as long as I go on with my LIFE. Memories don't change, but people do. Good times become good memories, but bad times become good lessons, they say; and yes, I have been through it and I know what the outcome is. I do not have a large collection of art of any sort, but what I do have is the experience and the memories that will last forever.

Today I have preserved a little collection of my drawings, which I learned to make solely on my own: no teachers, no guidance, nothing. Many of my drawings were destroyed; what is left is posted on my social network site. Not all my art contains emotional content, but a few pieces do have an emotional touch to them. Only an artist, or a devoted hobbyist, could understand this language. The most painful thing you can witness is your own work being destroyed in front of your eyes while you can do nothing. Art doesn't need any language. Art itself is a language we all speak, live and die for. Art is such an invisible language that no one can teach it to anybody. Above, I mentioned that soul + mind + heart become your centre point of concentration, which sooner or later becomes your memory when you finish your art. And when you finish your art, that memory becomes your history. And if you look back at your history, all your memories will bring back your art, the time, and all the emotions that you put in.

It is all a cycle of nature, and I know for sure that it keeps coming around. And I am still hungry for art.

Sunday, April 06, 2014 Vikas Nagolkar

LOL_v_2.000 Lesson Of Life
Saturday, April 12, 2014

LOL (Lesson Of Life) v.2.000

Brief history of Jivdani Mata

The Goddess rests in a temple situated about 1460 steps above the ground on a hill that forms a part of the Satpura Range in Virar, a northern Mumbai suburb, about 60 km away from Mumbai. The hill offers a very picturesque view of Virar and its vicinity. During the nine days of the Navratri festival many followers visit the shrine, and devotees also tend to visit on Tuesdays and Sundays.

The name Virar comes from Eka-viraa. Just as Tunga Parvat becomes “Tunga-ar”, similarly “Vira” becomes “Vira-ar”. There is a huge temple of Eka-vira Devi on the banks of the Vaitarna River at the foothills of Tunga Parvat (now totally ruined by the continuous raids of Mohammedans and Portuguese over the last 400 years), where people used to conclude their “Shurpaaraka Yatra”, as described in the Puranas and local legends. There is a huge tank here dedicated to Eka-veera Devi called “Viraar Tirtha”, i.e. “Eka-Viraa Tirtha”. Even today, on the west bank of Viraar Tirtha, one finds a carved stone about three feet long and nine inches broad. Below it is a group of female figures, the Yoginis of Ekaveera Devi. Nearby one can find a stone with a roughly cut cow and calf (Savatsa Dhenu), a symbol of Govardhana Math which symbolizes eternity, or Moksha. Moving ahead, near the foot of a knoll of rock, are two cow's feet (Go-Paad) roughly cut into the rock.

The legendary story of Jivdani Devi is as follows. During their forest journey, the Pandavas came to Shurparaka. They visited the holy temple of Vimaleshwar consecrated by Lord Parashuram, and on their journey to Prabhas halted on the banks of the Vaitarna river. There they worshipped the Bhagavati Ekaveera on the banks of Viraar Tirtha and, seeing the serenity and lofty nature of the place, decided to carve caves in the nearby mountains. They did so on the hills nearby and installed and worshipped the Yoga Linga of Ekaveera Devi in one of the caves. They called her Bhagavati Jeevadhani (that is, the Goddess who is the real wealth of life). The Pandavas also made a set of small caves, now known as “Pandav Dongri”, about a mile from Shirgaon, for the hermits. Many yogis used to stay in Pandav Dongri and have darshan of Jeevdhani Devi. After the onset of Kali Yuga, and after the advent of the Buddhist faith, the number of Vaidik Yogis lessened, and slowly people forgot the hillock and the Devi.
In the times of Jagadguru Shankaracharya's advent, a Mahar, or Mirashi, used to stay in Viraar and graze the village cattle. He came to Nirmal Mandir for the darshan of Jagadguru Shankaracharya Padmanabha Swami and requested His Holiness to bless him so that he could have darshan of his beloved Kuladevata. The Jagadguru was pleased with the devotion of the Mahar and advised him to serve Go-Mata on the foothills of Jivadhani, saying that at the appropriate time he would have darshan of his Goddess and attain Go-Loka. For the rest of his life he literally followed the advice of Jagadguru Shankaracharya and herded the village cattle. While grazing the village cattle, he used to see a cow grazing along with them whose owner never paid him for her herding. By his virtue, he determined to find the owner of the cow. He followed the cow to the top of Jeevdhan Hill, where a beautiful woman with divine features appeared. The Mahar remembered the words of Jagadguru Shankaracharya and understood that she was none other than his Kuladevi Jeevdhani. He was overjoyed and asked, “Oh Mother! I have grazed your cow; will you not pay me for her herding?” The Devi just smiled in delight and was on the point of putting some money in the Mahar's hand when he said, “Do not touch me, I am a Mahar. Give me something which cannot be spoilt by touch, words, smell, figure, or ether.” Hearing this, the Devi asked, “Lo, my child, from whence did you learn this unique knowledge of Varnashram Dharma and Moksha Dharma?” To this the Mahar replied, “From none other than the Grace of Jagadguru Shankaracharya.” Bhagavati was pleased by this and said, “By your virtue (Punya), see: this cow, which is none other than Kaamadhenu, has taken your forefathers to higher abodes by her tail, crossing the Vaitarani.” Thus saying, the Mahar saw the cow leap from the hilltop, leaving two of her footprints at the foot of the hill and the other two across the Vaitarani River in the heavens.
Now the Devi said, “I confer upon you the thing which you demanded, that is, Moksha.” Saying so, the Mahar attained Moksha (the real Jeeva Dhana, the real wealth of Life), and the Devi was about to disappear into the cave when a barren woman who had seen all of this divine incident screamed, “Devi Devi, Amba Amba, will you leave this barren daughter of yours without our jeevan dhan, a child in my lap?” The Devi was pleased by her prayers and said, “Great indeed are you, who saw all three of us. I henceforth bless you with a child.” The lady was not satisfied by this; she said, “Oh Mother of the three worlds, do not just bless me, but let all barren daughters of yours who pray to you be conferred with a child.” The Devi was pleased at this and said, “See, henceforth, due to the advent of Kali Yuga, in order to maintain the purity of rituals, I will stay in a hole in the niche of the cave. The barren women who offer me betel nuts in this hole, as is offered in my original place in Mahurgad, will be rewarded with progeny.” Thus saying, the Devi disappeared. This lady spread word of the incident, and thus the Jeevdhan hill once again started to be visited by pilgrims. The presently installed image is a very recent one; the original sanctum sanctorum is the hole in the niche of the cave, which is the central place of worship. A fair is held on Dussehra day, attended by thousands of people. The fort is visited by tourists frequently. The temple of the Devi has been completely renovated, and there is a beautiful idol of the Devi in white marble. There is also a temple dedicated to Sri Krishna.

LIFE... in my previous article I shared what LIFE is: Living In Free Environment. YES, by nature we all should know what it truly means. There are very few people on this planet who have lived longer than anyone could possibly have imagined. What I mean to say is that the average life span of a human today is, say, 100 years, and we have witnessed people challenging even that, crossing 100 and still continuing. Well, some people say they lived so long because of good karma, while others say it was because of keeping good health, or what may partly be described as yoga. Every individual has a different way of judging people. That does not mean they are wrong; neither are they purely right. Science is said to have all the answers we need. But does science know what happens when a person is dead, or where he moves on to? How was the first human born into this world? If you Google it, you'll find thousands of different theoretical answers, assuming this might be right or that might not be wrong either. Has science ever proved that there is definitely an afterlife, or ever proved that GOD does not exist?

There is no single soul on this globe who could have written the Bible, the Quran, or the Shrimad Bhagavad Gita with such captivating illumination of the exact and pure meaning of LIFE. The planetary system, the cosmic order, the universal truth: all of it is controlled by some supernatural authority. Science can make anything, but it cannot create LIFE, or simply put LIFE into something. Why can't we control tsunamis, tornadoes, volcanic eruptions? Does science have an answer to this? I think NOT. My main reason for writing this article on LOL is what I experienced just a couple of days ago. On 4th March 2014, a Friday, I had no work and no schedule. I was totally free to do anything I wanted, and I had known this a day before (i.e., on Thursday). I planned to go to the Jivdani Mata Temple (a holy and religious temple in Virar East) to pray. I also thought it would be good if I left home a bit early. I opened my book cupboard, took out my small black leather-coated bag, and started assembling one mid-level 200-page diary, a couple of pens, and a spare hankie. It was the start of summer, if I recall. My cell phones (both the Samsung Grand and the Lenovo) were not fully charged, and I thought I might need to save the battery, so I switched them OFF. I woke up the next day, did all my regular activities, and at 10.00 AM I started to get ready. Destination: Virar. Place: Jivdani Mata Temple. I took off, got hold of a cab, and rode to the nearby railway station. As I was heading to Virar from Dadar station, something held me back, saying: do we really need to go there? I mean, we could go to the mall, hang around, and check out the book sections. As it was Friday, I was fasting. I went to the ticket counter and got in line. The good thing was that it was less crowded. The line next to me was the line for passes, and a skinny old lady was standing there. Anybody could have seen she was in a real hurry. She asked me to buy a ticket for her and gave me her money. She was on her way to CST, and I to Virar.
Two opposite directions. From Churchgate (the starting point) to Virar it is exactly 59.98 km, and from Dadar, 48 km. I went down to the platform and waited for the train to come. At about 11:26 I got a fast local for Virar. In between, I again thought that I should head back home and do something else, some other work, rather than wandering around in this hot weather. This was the third time something took hold of me, stopping me from doing what I was doing. The train arrived on platform no. 3. I got in but did not sit. I took out my earphones and started listening to songs, just to pass the time. It took almost 1 hour and 30 minutes to reach Virar station. Out I went and took a shared auto-rickshaw. Everyone in it was going to the same place I was headed. It took nearly 10 minutes to get to the foot of the temple hill. I should tell you that I had been to the Jivdani Temple a couple of times before. The first was with my college friends on 26 July, the day Mumbai was washed out by rain: many people died, many were injured, many were lost, and thousands were stuck for days. The second time was with a friend from my job, along with the head administrator madam.

I then started walking towards the staircase. There are 1490 stairs in total; you have to walk up all 1490, and then you reach the base of the temple, from where you go up one more floor in a queue to enter the perimeter of the temple. I started to mount the steps. Before that, I bought one nice chilled bottle of water to quench my thirst. This was the first time I had ever been this tired. I don't know why, but I was in such a condition that my legs were shivering and I was sweating horribly. It was as if my blood pressure was slowly dropping; my body was in no state to move ahead. I moved aside to a railing next to where I was standing and sat there. As soon as I sat down, everything went bright white: I couldn't see a thing. Then that bright light faded away, but not entirely. There was a beggar sitting just opposite me, constantly looking at me, wondering what was happening to me. I was part conscious and part unconscious. That bright white was like a game being played with me, fading in and fading out. I was in no state to do anything. I sat there with my bag in one hand and the bottle in the other. I couldn't think of anything except ONE little bit of HELP. I know that in the blistering season such things happen, but not like what I was going through; these were two different things. A couple with a small kid was passing by, and I asked the aunty "how much long is it to the temple"; what I meant to ask was how much further the temple was. She simply replied that I was not even halfway there, and went ahead. The situation was so overwhelming that I couldn't open my eyes, and if I did, the same light thing would occur again and again. For 20-30 minutes that thing kept happening to me, but the last 10 minutes were horrific. I thought it was the last day of my life. In those last 10 minutes the light kept getting brighter and brighter, and I kept getting dizzier. I couldn't think of anything else.
More sweat, more dizziness, and more bright light just made my day horrific. I thought it might have been best for me to stay at home when something first took hold of me; maybe this would never have happened. Or, if it had happened, there would have been someone who could have taken care of me, maybe taken me to see a physician or some such. This was the fifth lesson of LIFE I got. I again drank a little water and sat there without any movement, keeping my head on my lap. The homeless person could have helped me, but he was disabled; he couldn't move. He just kept looking at me, watching both ways to see if anyone could help this guy, or at least ask what the matter was. I don't have vertigo (a common medical concern for those who are afraid of heights); I have climbed mountains even higher than this. Anyway, I sat there for another 10 minutes.

During that frame of time, I thought I would head back home and say nothing. But then another thought came: go to the bottom and take the ropeway straight up towards the shrine. I was again in a to-and-fro situation, and this was a HELPLESS situation. No one could explain it, and no one could advise me either way.

I was in a complex situation, in no state of mind. I waited there and prayed for a while: "If I have done even a little good karma, let me worship you inside your temple, and then I am yours." I waited a few seconds, got up, washed my face, drank a few little drops, and started to move on. On getting up, I gave the chilled bottle of water to that homeless guy. I don't know how I found the energy to climb in such a situation, but all thanks to Mata, who showered her light over me and showed me the way. I kept thinking about the tiredness and how it had lifted.

I went on to climb more stairs, and, thinking the episode might occur again, I kept sitting down. I carried on with this process, climbing up and then sitting down, asking a few of the vendors there the current time and how far it still was from where I stood. This way I made it up to the main entrance, stopping nearly 10-12 times in between. Finally I reached the area where another set of stairs leads directly inside the main chamber of the shrine.

I bought a basket of the materials needed for worshipping Mata and went up. By the time I reached it, the temple had been closed for 30 minutes for cleaning, so I sat down in the queue. After some time, another bell rang and we were let inside the temple. To be sure, I was still not able to see clearly over any distance because of that bright white light flashing in my eyes. I went in, worshipped Mata, prayed, stayed there for a while, and then started to move down, heading home. On the way down I bought another bottle of chilled water and had just drunk a few drops when I saw an old lady begging me for some money. I thought that, in this season, money would not be of as much use to her, so instead I gave her my bottle full of chilled water and took off. This was the sixth lesson of LIFE for me: help the neediest; think of others first, rather than just thinking of yourself.

My legs were shaking. 1490 steps down is not an easy job when you are completely exhausted. I took just two stops, and then I was at the bottom of the main steps. There was a picture of Mata framed at the bottom; I prayed, then went to take a shared auto-rickshaw, buying another bottle of water before getting in. Once I reached Virar station, I quickly sped up my pace to catch the Churchgate fast local on platform 1. As usual, I was standing at the door, managing to plug in my earphones, when a small kid came begging around again, and I gave him the filled chilled bottle of water rather than money.

Train started and I was on my way back home.

I think that, because of my little karma, Jivdani Mata helped me, giving me some little strength to reach her and worship her. This was the most important lesson in LIFE I have learned: "Believe in GOD (Generator + Operator + Director), have faith in him, and then leave all the rest to him."

Speaking of Files?
Short History of 3ds Max

Around the release of Autodesk 3D Studio R2 (DOS), the first version to feature the IPAS interface for 3rd-party extensions, the Yost Group developers of the application (Gary Yost, Tom Hudson and Dan Silva) started thinking about a next-generation software based on object-oriented programming techniques and running under a Windows OS. After hiring some more developers, the team delivered two more releases of 3D Studio DOS (R3 and R4) while actively developing the project code-named "Jaguar".

3D Studio MAX was officially announced at Siggraph 1995 and shipped to users in April 1996. At the same time, the Autodesk Multimedia Division was rebranded as Kinetix, a division of Autodesk; thus the full name of the official product was Kinetix 3D Studio MAX. The product contained about the same feature set as 3D Studio DOS R4, but implemented all tools using a completely new object-oriented, procedural modeling paradigm featuring the Modifier Stack, an easier-to-use linear version of the Prisms/Houdini procedural pipeline. Some elements, like the Material Editor and the animation controller system, were largely enhanced compared to the DOS version, and the render subsystem allowed for volumetric effects and 3rd-party plug-in renderers (which started appearing shortly after the first release, RayStudio and RayMax being the first two available). Release 1.0 required Windows NT 3.51 and supported the first 3D Labs GLiNT hardware accelerator cards available for the PC via custom Heidi drivers. There were two point updates, 1.1 and 1.2. The SDK shipped with 1.1; 1.2 was an update to support Windows NT 4, which featured the Windows 95-style UI.

3D Studio MAX R2 (code name Athena) was officially announced at Siggraph 97 in Los Angeles, CA on August 4th, 1997 and shipped to customers on September 24th, 1997. It included over a thousand new features and workflow improvements.
The most notable additions were:
- Ray-tracing in the Scanline renderer via Raytrace materials and maps, developed by Blur Studio's Steven Blackmon and Scott Kirvan (who later split off to form Splutterfish and develop another popular renderer, Brazil r/s)
- Lens Effects Post Effects, licensed from Digimation
- NURBS modeling tools
- The MAXScript programming language, licensed from John Wainwright/Lyric Media
- OpenGL support

There was one point release, 2.5. It was the first and only non-free point release in the history of 3D Studio and included, among other enhancements, NURBS additions (support for Trims) and VRML import support.

3D Studio MAX R3 (code name Shiva) was announced at the Game Developers Conference in April 1999 in San Jose, CA and was released to customers on June 15th, 1999. It was the last version to be published under the Kinetix logo, although the division had already been merged with Discreet Logic but did not yet have a corporate identity design. The core of the program was largely rewritten to allow better integration of MAXScript, and the Scanline Renderer was enhanced with support for pluggable anti-alias filters and supersamplers. The user interface was redesigned to support larger true-color icons on customizable tabbed toolbars where custom MacroScripts could be placed by the user.

Code name trivia: Shiva is the Hindu god of destruction, thus the project code name signified what the core of the application went through before being recreated. At the same time, the Autodesk VIZ version from the same development cycle was code-named Kenny, for exactly the same reason! Meanwhile, Gary Yost, the "father of 3ds Max", was working in complete secrecy on bringing mental ray to 3ds Max in a project code-named Ganesh, a name closely associated with Shiva. (The mental ray stand-alone connection for 3D Studio MAX R3 shipped to customers in May 2000.)

The point update to 3.1 is considered by many the most stable version of the software in its history.
Discreet 3dsmax 4 (code named Magma) was initially announced at Siggraph 2000 in New Orleans in an early technology demo. It featured, among many other things, a new IK system, QuadMenus context menus and a unified ActionItems UI customization system, an ActiveShade render preview mode, a redesigned Modifier Stack (Stack View) with support for drag and drop, a new Editable Poly modeling toolset, DirectX Shader support in viewports, ActiveX support in scripted rollouts, MultiRes mesh optimization based on Intel technology, and more. There were two point releases for individual customers, 4.1 and 4.2, and a special 4.3 update for educational users (schools, universities) only.

Discreet 3dsmax 5 (code named Luna) was the first release ever to support the plug-in format of the previous version. Plug-ins developed for 3dsmax 4 could be used in 5 without a recompile, while both 2 and 3 had required completely new versions. The biggest addition to 3dsmax 5 was the Advanced Lighting sub-system of the Scanline Renderer, where two new plug-ins were introduced: a brute-force Global Illumination module called Light Tracer, and a Radiosity module based on further research by the developers of Lightscape. (Historical note: Lightscape was acquired by Discreet Logic a couple of years before the Autodesk acquisition.) This release also included support for Photometric lights and Daylight.
Further additions were:

- the inclusion of Reactor (previously a separate plug-in published by Discreet, based on the HAVOK dynamics engine)
- Set Key animation mode
- a refactored Track View with Curve Editor and Dope Sheet modes
- an enhanced UVW Unwrap editor
- the Render To Texture feature
- a new Named Selection Sets editor
- new Transform gizmos
- Character Assembly and Bone Tools
- Spline IK
- Gimbal rotation mode
- Auto-Tangent interpolation
- an improved Skin modifier with Weight Table
- improved HSDS modifier UI
- support for Layers (taken from 3ds VIZ)
- Ink'n'Paint material
- Translucent shader

On the human resources side, it is interesting to note that the product was developed under Chris Ford, previously senior Maya product manager, who moved to Discreet when Alias dropped Wavefront. (He is now PRman business director at Pixar.) Other related 3D trivia: Bob Bennett, previously product manager for 3D Studio DOS, was Maya Development Manager for many years until the Alias acquisition by Autodesk - he is now with Luxology. There were three point updates - 5.1, 5.1SP1 and 5.5 (the latter was the extended version with the Particle Flow extensions).

Discreet 3dsmax 6 once again required recompiled plug-ins (which later would be usable in 7 and 8). The main new features were:

- mental ray as an alternative renderer
- Particle Flow (previously shipped as an Extension to 5 for users on subscription)
- a refactored Schematic View
- Shell modifier
- new Vertex Paint
- Reactor 2 dynamics
- network support for Render To Texture

Discreet 3dsmax 7 (code named Catalyst) was an evolutionary update on top of the 3dsmax 6 core. Main new features were:

- new Editable Poly tools including Bridge, Deform and Relax painting, Soft Selection painting, the Preserve UVs option, etc.
- a new Edit Poly modifier, which was supposed to ship as an Extension to 6 but made it into 7
- support for normal map generation and rendering
- mental ray 3.3, including Sub-Surface Scattering and Ambient Occlusion shaders and Render To Texture support
- per-pixel camera mapping
- flat shaded view
- Character Studio 4.3 included in the base package
- SkinMorph and SkinWrap modifiers
- TurboSmooth modifier
- Parameter Collector
- a refactored Reaction controller (formerly known as the Reactor controller)
- Walk-Through mode for first-person navigation in the viewports

Autodesk 3ds Max 8 (code named Vesper) was published in the fall of 2005 and was the first release in the history of the product not to break SDK compatibility in a third major update - in other words, plug-ins from 6 and 7 could be used in 8 without the need for a recompile. The Discreet division of Autodesk was moved closer to the mother ship and turned into the "Autodesk Media and Entertainment Division" (AMED, or Autodesk ME for short), bringing the history of the 3D Studio line, which started as Autodesk 3D Studio in 1990, full circle. The "M" in "Max" was capitalized again. Main new features were:

- Asset Tracking with support for 3rd party solutions and Autodesk Vault shipping with the package
- enhanced XRefs
- MAXScript Debugger
- support for Scene States
- Hair and Fur (shipped as an Extension to 7 earlier that year, based on Joe Alter's Shave & Haircut)
- Cloth (also available as an Extension to 7, based on Size8's ClothFX, formerly known as Stitch)
- Editable Poly enhancements - Shift Ring and Loop, better Bridge and Edge Connect, Open Chamfer option, clean removal of edges
- enhanced Skin tools including Grow and Shrink, Loop and Ring, Weight Tool
- enhanced Unwrap UVW with Pelt Mapping support, better Relax options and a Render Template tool
- Sweep modifier and enhanced spline options including rectangular cross-sections
- Brush Presets
- real-world map scale
- Motion Mixer support for non-biped objects

Autodesk 3ds Max 9 (code named Makalu) was the first release to include both 32 bit and 64 bit builds of the software.
It shipped to customers in October 2006 and once again required recompiled plug-ins, due to the switch to a newer Visual Studio compiler and because the MaxSDK6 was getting old and was in need of an update to fix long-standing bugs. A 64 bit version of 3ds Max had been demoed as early as the year 2000, when Intel was attempting to introduce the Itanium line of CPUs. A "real" 64 bit build of 3ds Max 8 for the x64 architecture, developed under the project name "Scopic", was shown to the audience of the Autodesk User Group meeting at Siggraph 2005 and was later merged with the Makalu project to deliver both 32 and 64 bit on the same DVD for 3ds Max 9. Major new features:

- Project Path support, including support for relative paths
- Proxy Textures Manager
- .NET support in MAXScript, including classes, objects and UI controls
- ProBoolean and ProCutter (shipped as an Extension to 8, based on the PowerBooleans 3rd party plug-in), enhanced in this version with MAXScript exposure of ProCutter
- HAVOK 3 engine support in addition to the existing HAVOK 2
- better mental ray 3.5 integration, with support for physical sky and sun, Arch & Design shaders and more
- faster screen redraws in Direct3D mode, including incremental D3D mesh cache updates, faster spline redraws and more
- Viewport Stats option for all viewports
- a new Hidden Line viewport shading mode
- support for CG shaders
- Animation Layers
- hair styling in the viewport, support for reflections
- updated PointCache, including interoperability with Maya 8 (which uses the same cache format)
- better interoperability via FBX

Autodesk 3ds Max 2008 (code named Gouda) was demoed at Siggraph 2007 in San Diego, CA and shipped to customers on October 17th, 2007. It was SDK-compatible with 3ds Max 9, allowing plug-ins for the previous version to once again be used without a recompile.
The name change allowed Autodesk to align all its products within a fiscal year - Autodesk Fiscal Year 2008 started in March 2007 - and to signify to users which versions are interoperable (for example, 3ds Max 2008 should be able to import/link data from AutoCAD 2008 and Revit 2008). The SDK version number still shows the internal version as 10. Major new features:

- core code optimizations leading to 10+ times faster viewport performance with 10K+ objects
- faster selection, material assignment, transformation, parenting and layer assignment operations
- Adaptive Degradation system updated to perform view-dependent object culling (similar to the Object Culling Utility, which was a prototype of the system and has been removed)
- Scene Explorer framework, developed as a testbed for running managed code and .NET components inside the 3ds Max application
- Review: per-pixel lighting and shadow casting from up to 64 lights using Shader Model 3.0; preview of mr Sun and Sky in the viewports; preview of the Arch & Design mental ray shader in the viewports
- MAXScript tabbed editor ("ProEditor") based on the open source Scintilla controls and the SciTE editor, with features like multiple documents in a single tabbed interface, collapsing and expanding of code blocks, search and replace supporting regular expressions, bookmarks, extensive style and color customization controls with support for various languages and per-directory style definitions, auto-complete and macro definition features, a customizable right-click context menu, Find In Files options to search for a string in multiple files, and optional support for version control systems like Perforce or Subversion
- inclusion of all Avguard DLX extensions into the MAXScript core
- Working Pivot mode for quick object and sub-object transformations about an arbitrary point
- Selection Preview mode in Editable Poly
- Edge Chamfer Segments in Editable Poly
- support for a file per frame using Maya's native Point Cache format, as an option to share baked deformation animation between 3ds Max and Maya
- mental ray 3.6 enhancements, including a Sky Portal light for transferring outdoor lighting into indoor scenes, Photographic Exposure Control, and photon emission from the A&D material
- mental ray Production Shaders included but unsupported (these are supported in 3ds Max 2009)
- various improvements to the Character Studio Biped
- keyboard shortcut override system

Autodesk 3ds Max 2009 was announced in February 2008 and released on March 31, 2008. It is the first (and probably last) full release built in a shortened development cycle of just half a year. This was done to align the release dates of all Autodesk products and also to make it clear that product A will work with product B if both carry the same fiscal year number. While the SDK is unchanged, a compiler change makes the recompilation of plug-ins necessary, but with very little overhead for 3rd party developers. Another major change is the introduction of a dedicated version of 3ds Max for the design and visualization market, called 3ds Max 2009 Design. The two flavours of 3ds Max 2009 use the same binary and are fully compatible with each other, including file format, data and plug-ins, but have different icons, splash screens, documentation, tutorials and learning paths to enhance the user experience. There are only two differences between the two versions - the "Design" version does not include the SDK, and the "Entertainment" version does not contain the Lighting Exposure Analysis tool developed for architects performing LEED certification. Major new features are:

- unified view navigation controls shared with most Autodesk products, using the ViewCube (already in Maya) and SteeringWheels system, providing orbiting, first-person walk-through, fly-through and viewpoint history features for casual users
- Photometric Lights have been reworked and streamlined, with more area light shapes, photometric web previews in the file dialog and the viewport, realtime preview in the viewports, and Falloff controls for accelerating photometric light processing
- new Iterative rendering workflow with simplified controls in a renderer-specific control area of the Virtual Frame Buffer (fully scriptable), including caching of geometry and GI for fast reshading, support for rendering pixels of the selected object only, and region rendering with gizmo display in both the viewport and the VFB
- updated Composite map with support for various transfer modes, masking and color correction per layer
- new Color Correction map
- A&D material hardware preview in the viewports now supports shadows
- Autodesk ProMaterials (shared between various products) for simpler scene setup and data interchange
- mental ray Production Shaders are now enabled and supported
- mental ray Proxy object with animation support
- mental ray provides a new auto-balancing BSP2 acceleration method
- the Daylight system now supports various weather models, including control via weather files
- multi-threaded Hair buffer rendering and viewport redraws
- support for Skylight
- Character Studio "Hands As Feet" support for quadrupeds and an in-place mirror option
- new direct soft-selection manipulation workflow
- MAXScript improvements, including enhanced UI controls and a new binary search method for fast data access in sorted lists
- improved interoperability with Autodesk Revit via FBX and metadata
- new OBJ I/O plug-in licensed from Guruware
- 3ds Max 2009 Design only: Lighting Analysis tools with Light Meters and Light Overlay for measuring light intensity from physically-based sources

Autodesk 3ds Max 2010 was released on April 18, 2009. Major new features are:

- Graphite Modeling Tools (formerly PolyBoost) integrated via a Ribbon interface
- introduction of Containers for sharing and publishing content between scenes
- XView geometry checkers for interactive checking of error conditions in scene objects
- Ambient Occlusion and Exposure Control previews in the viewports
- soft-shadow support in the viewports
- metadata support in the .MAX file, allowing external access to asset information without opening the scene in 3ds Max
- Quadify modifier based on ProBoolean technology
- Cloth tearing and inflating
- mental ray global tuning parameters
- mental ray FG and GI cache interpolation
- new Material Explorer based on the Scene Explorer technology
- new OBJ I/O plug-in
- Flight Studio plug-in included
- ProOptimizer based on PolyCrunch technology
- ProSound with support for up to 100 sound tracks in stereo
- Particle Flow Advanced based on Orbaz Particle Flow Tools Box #1

Autodesk 3ds Max 2011 was released on April 8, 2010. Major new features are:

- new Slate Material Editor based on NodeJoe technology
- new modeling tools, including Object Painting
- enhanced Viewport Canvas in-view painting tools using layered PSD files
- CAT (Character Animation Tools) included in the package
- support for local edits to Container content
- Quicksilver hardware renderer based on DirectX technology, with support for hardware and software anti-aliasing, soft shadows, depth of field, ambient occlusion, indirect illumination and reflections
- support for 3ds Max material representation in the viewports
- Autodesk Material Library with around 1,200 materials
- support for saving to the 3ds Max 2010 file format
- FBX File Link of Autodesk Inventor files
- improved Autodesk Inventor import
- native solids import based on nPower Software technology
- Ribbon customization
- in-view Caddy manipulators
- sliding Command Panel
- new OpenEXR I/O plug-in based on Cebas technology
- Autodesk Composite 2011 (based on Toxik technology) included

A brief look at the core principles of how the Scanline render engine works
Now, if asked what rendering is, many artists would say it is giving the final appearance to our scene. OK, well, that is just their own practical approach to it. The second answer we get is "final look development", but look development is quite different from rendering. As with differentiating FG and GI, I mentioned that definitions are fine for starters, but for professionals the practical approach is far more important. Many artists depend upon definitions and many don't; it is just a matter of preference. Now let's get back to rendering, shall we?

What is rendering? It is the process of generating a 2D image from a description of 3D data. But from a computational point of view, rendering is a carefully engineered program built on disciplines like the physics of light, mathematics, visual perception and software development. Rendering is not always GPU-based: offline renderers such as the Scanline renderer run on the CPU, while viewport display and real-time rendering are accelerated by the GPU (Graphics Processing Unit) through graphics APIs like Direct3D and OpenGL. Very few applications can switch between Direct3D and OpenGL, and 3ds Max is one of them, whereas Maya, XSI, C4D, LightWave 3D, Blender, etc. use OpenGL. Note: playing your animation in the viewport doesn't make your application real-time; there are procedures and hidden principles behind real-time display. So what can we say now? What is rendering? It is a signal-processing operation where the scene's vertex data is run through mathematical algorithms and output as a 2D raster of pixels.
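As a minimal sketch of that 3D-to-2D idea (my own toy example with made-up camera numbers, not how any particular renderer is implemented): projecting a 3D point onto a 2D image plane boils down to a pinhole-camera division by depth.

```python
# Toy 3D-to-2D projection: the core of "rendering turns 3D data into 2D data".
# A pinhole camera sits at the origin looking down +Z; focal_length is in pixels.

def project(vertex, focal_length=500.0, width=1024, height=683):
    """Project a 3D point (x, y, z) to integer pixel coordinates."""
    x, y, z = vertex
    if z <= 0:
        return None  # behind the camera, not visible
    # Perspective divide: farther points land closer to the image center.
    px = int(width / 2 + focal_length * x / z)
    py = int(height / 2 - focal_length * y / z)  # screen Y grows downward
    return (px, py)

if __name__ == "__main__":
    print(project((0.0, 0.0, 1.0)))   # a point straight ahead -> image center
    print(project((1.0, 1.0, 2.0)))   # an offset point
    print(project((0.0, 0.0, -1.0)))  # behind the camera -> None
```

Everything else a renderer does - shading, anti-aliasing, depth sorting - is layered on top of this basic projection of vertices into the pixel grid.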
Now, this is a pretty technical statement, as I said in the conditions of this Quest. Let's see what is inside rendering, shall we? So how does this Scanline render engine work? In the real world, whatever is thrown up has to come down due to gravity, right? Rendering has a similar one-way sweep: we feed the engine mathematical values and tell it to run its line of scan from one specific point to another. My point is simple: the Scanline render engine works row by row, calculating (or say scanning) all the geometry projected into the frame. The rows are ordered along the Y axis of the image: Scanline calculates from top to bottom, one row at a time. To be clear, there is no actual gravity or mass involved - the top-to-bottom order is simply a convention of the algorithm (and of display devices), not a physical property of the scan line. If you want to measure this scan line, it is very simple: it is one pixel tall - its height is one pixel row of the output frame, and its effective shape follows the pixel aspect ratio. So, in short, the line of scan depends on the pixel aspect ratio and the frame resolution.
The line of scan sweeps along the X axis, and rows are taken in Y order, whichever comes first: all our 3D data in the scene is calculated row by row. This procedure continues until the entire frame is covered. All the data calculated row by row is stored at a specific location, or cached in RAM. What essentially happens in this process is that the frame you have set gets subdivided into a sequence of horizontal strips called "scanlines". As it is subdivided, a table is formed; I'll talk about that table later in this section.

That is the reason why production houses ask artists if they know basic rendering - not V-Ray, mental ray, Maxwell Render or any other engine on the market - because when you work in a production house delivering the final video output, it matters which broadcast target it is for: TV, computer, IMAX, PVR, etc. Display systems, analog and digital alike, build up the picture in scanlines: an analog CRT television draws the image as horizontal scanlines traced by an electron beam, and digital displays address a grid of pixels row by row. Remember when your TV ran out of signal and you saw noisy, tiny flickering points? Those are not the line of scan but the electron (cathode-ray) beam painting random noise onto the phosphor behind the glass.

The scanline gets pixelated. It's amazing, right? Nobody would think a scanline could get pixelated - it is not an image formed at any resolution - but the fact is, YES, it gets pixelated, and we shall now see how. It was just yesterday that my friend in my studio fired up Backburner network rendering for his project while I was on the verge of leaving the studio. Well, you know, when you talk about rendering, it is extremely vast; even the components and the core components can be broken further into various components and core components.
But that would become extremely technical; I'll write about that in the future when the time comes. For now, let's see how the Scanline renderer gets into pixelation. (If you raise this with other artists, some may roll their eyes and carry on with the topic, leaving scanline pixelation behind.) Now, you may have noticed that, especially for digital matte painting, setting up DPI / PPI along with the resolution is important (creative people do not think about all these values, but technical artists have to).

OK, let's get back to our topic. To speak about scanline pixelation, or scanline rasterization, you should know about resolution, pixels and bit/color depth. Let us look at these individually; it may become crystal clear for all of us, shall we?

Resolution: I have come across hundreds of definitions for this one. "Resolution is a collection of PIXELS" - pretty simple, yeah, it has to be. When I say that resolution is a collection of pixels, it fairly states that you get a frame containing pixels, and the number of pixels depends on the chosen frame size. If you look at the output size settings of any 3D software, you'll find built-in sizes like VistaVision, HDTV, PAL, PAL D-1, NTSC, NTSC DV, NTSC D-1, 35mm 1.316:1 Full Aperture (cine), Anamorphic, IMAX, 35mm Slide and many more, along with a custom size too. When you pick any of the built-in ones, you get fixed values that cannot be changed.
Let's say my project is VistaVision at 1K resolution; surely I'll go for the VistaVision 1K preset (the presets offer 1024x683, 1536x1024, 4096x2731). Taking this further, let's say I take a resolution of 1024x683. Remembering that resolution means a collection of pixels, we get this: 1024 is the width and 683 is the height, both measured in pixels. It is obvious that you cannot see anything if only a width or only a height is rendered out. So, to see the clear picture of what we are dealing with, we multiply width by height: 1024 x 683 = 699,392 pixels (remember, resolution = collection of pixels). The width contains 1024 pixels and the height contains 683 pixels; "collection of pixels" means multiplying the one count by the other to get the net result. And 699,392 is our net result when you set your frame size to 1024x683. The same calculation goes for the others:

Output size: VistaVision. Frame size: W = 1536, H = 1024, net result = ? Calculation: frame size = W x H, so net result = 1536 x 1024 = 1,572,864 pixels. So if you set your resolution to 1536x1024, you'll get 1,572,864 pixels inside your frame or window.

Output size: 35mm 1.316:1 Full Aperture (cine). Frame size: W = 2048, H = 1556, net result = ? Calculation: frame size = W x H, so net result = 2048 x 1556 = 3,186,688 pixels. So if you set your resolution to 2048x1556, you'll get 3,186,688 pixels inside your frame or window.

Output size: Panavision. Frame size: W = 1536, H = 698, net result = ? Calculation: frame size = W x H, so net result = 1536 x 698 = 1,072,128 pixels. So if you set your resolution to 1536x698, you'll get 1,072,128 pixels inside your frame or window. These are a few examples I have given here; you can calculate others at your convenience. Note: if you look closely, whenever you use a built-in frame size, your image aspect ratio (frame aspect ratio) as well as your pixel aspect ratio is fixed; you have no option to change them. But if you switch to custom, where you can use any frame size you want, your image and pixel ratios are open to you. The main thing to understand is that changing the image aspect ratio effectively rescales the frame's height, while changing the pixel aspect ratio changes the image's displayed proportions without affecting the pixel width or height counts. I think this is enough for everyone to know about resolution. Note: RESOLUTION is always dependent upon PIXELS, and vice versa.

Let's see about PIXELS. I don't think I should go too deep into pixels; everybody knows about them, and much more than me, obviously. PIXELS: pixels are the smallest, tiniest elements of our image - discrete points of light. A pixel is always a 2D element (I'll talk about "3D pixels" later). Pixels are arranged in a grid system. Now, these pixels store lots of information about what is rendered in our scene - not only from the digital world but from the real world too. Consider an image with a frame size of 800x600: it contains 480,000 pixels overall (remember resolution!). Each pixel is a piece of the element's data in itself, and that element can be an image or a video; more data provides a more accurate visual of the element(s). The name is something we have been seeing since childhood but never encountered until we got into this field: "pixel" comes from "picture element".
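The width-times-height arithmetic used in the examples above is easy to check mechanically; here is a small sketch (the format names and dimensions are the ones quoted in this section):

```python
# Checking the "resolution = collection of pixels" arithmetic from the text.
# Each frame size is (width, height) in pixels.

FORMATS = {
    "VistaVision 1K": (1024, 683),
    "VistaVision": (1536, 1024),
    "35mm 1.316:1 Full Aperture": (2048, 1556),
    "Panavision": (1536, 698),
    "Example image": (800, 600),
}

def pixel_count(width, height):
    """Total pixels in a frame: simply width times height."""
    return width * height

if __name__ == "__main__":
    for name, (w, h) in FORMATS.items():
        print(f"{name}: {w} x {h} = {pixel_count(w, h):,} pixels")
```

Running this for any custom frame size gives the "net result" the text describes for the built-in presets.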
As I mentioned earlier, each pixel stores data - a portion of the element. When the multiple portions are assembled, we see the element. You cannot manually or digitally destroy pixels or move them apart from each other; if that were possible, the word "resolution" would lose its meaning, since resolution is totally dependent upon pixels. As I said, pixels store the data of various elements. I can also say that pixels can be transparent, and you can see it for yourself in Photoshop. When you hit Ctrl+N and set the background contents to Transparent, you are not telling Photoshop to make the pixels themselves see-through; you are opening a document at a specific resolution in which every pixel carries zero opacity (alpha) until you paint into it. Pixels cannot be in a floating condition; they cannot fly; they need to be placed somewhere, and that somewhere is the resolution grid. So even pixels are resolution-dependent, sticking together to form strips. A transparent region is conventionally displayed as a light-and-dark checkerboard; when we fill a color or bring an image over these transparent pixels, they show us the element, and you can check for yourself which pixel contains which color information. On the other hand, if pixels could not start out empty like this, why would we bring images into them or fill them with color? We would get everything ready-made, wouldn't we? Each pixel is subdivided into sub-pixel components - Red, Green and Blue - plus an optional fourth channel, alpha, used for highlighting or creating mattes. How finely color can be represented within a pixel is determined by the color depth. Let's see more about depth, shall we? Depth in a digital system represents the amount of color information inside pixels.
It gives us the full information about the volume of color present in the pixel. Depth is very important, and it is expensive in terms of signal-processing time: the more depth you set up, the more color information is fed to the rendering engine. With n bits per channel, each of R, G and B can take 2^n values, so the number of distinct RGB colors is (2^n)^3; an alpha channel adds opacity information of the same depth but no new colors. For example:

8 bits/channel = 16,777,216 colors (RGB)
10 bits/channel = 1,073,741,824 colors (RGB)
12 bits/channel = 68,719,476,736 colors (RGB)
16 bits/channel = 281,474,976,710,656 colors (RGB)

So the more the depth, the more color info is present, and the more time rendering takes. Now, I was talking about scanline pixelation, and what I wrote is correct; to know why the scanline gets pixelated, all the above fundamentals were important. When you render out the file, feeding all the info you want into the Scanline render engine, you get an image, and this image came from a specific frame size, so it is obvious that when you load your image sequence into your compositing application and zoom in, it will look pixelated. Note: it is a software-based calculation if rendered on a single machine, a network-based calculation if rendered on servers, and a farm-based calculation if rendered on a render farm. You are allowing the Scanline render engine to give you a pixelated result; you have no choice there, as it is the only step available - there is no option to cancel this strategy.

How the line of scan calculates our scene: OK, now for a somewhat mathematical way of showing the scanline fundamentals. 1 unit of the line of scan equals the basic pixel dimension we set. 1 unit of pixel = 1 (w) x 1 (h), so 1 unit of pixel = 1 sq. pixel; also, 1 step of the line of scan = 1 sq. pixel; therefore, 1 step of the line of scan = 1 (w) x 1 (h) = 1 sq. pixel.
From this we can derive that the line of scan depends upon the ratios we set up. But it is the sub-pixels that contain the actual data, not the pixel as a whole: sub-pixels are the data stored inside the container called a pixel. So even 1 line of scan can be split into a "line split scan": one line of scan breaks down into three lines of scan for calculating the three sub-pixels, plus the optional alpha one.

Scanline's Active Edge Table (this may be new for many of you, and you can learn much more from it): we all know that whatever we build in a 3D application is made of vertices, the smallest element in modeling. The shaders we feed to a model are ultimately tied to its vertices; even lighting and animation data are bound to vertices, and the render engine calculates and stores vertex information. Nevertheless, vertices continually form edges (lines, segments), which eventually form our model. So when you hit the render button, Scanline calculates the edge system of every element present in our scene. As I mentioned, these edges contain numbers of vertices which hold the info needed to get our required result. Every render engine, be it a physical or non-physical one, has options for where to put this data, and we get two modes from it: 1) Floating dialog: while rendering the scene, a window pops up displaying the result directly, or the passes you have set up. 2) Disk location: the information coming from the edges is stored at a specified location on the HDD or on a server; the buffering is not visible.

We all know how pixelation is produced, right? During scanline processing, the engine maintains a table formed by buffering the internal calculation of edges, called the edge table of the active line of scan (the active edge table). Here each pixel is a data bucket in itself. During the processing of a line of scan, the line gets intersected with the pixels - or better, the sub-pixels - and all the respective data is calculated and stored. An edge is "active" while the current line of scan crosses it. All the data is bound to vertices, with at least two mutually bound vertices forming an edge, and multiples of these form a chain - a vertex chain. These vertices are shared via the wireframe, so they also share data, known as shared vertex data.

I hope you guys have now got a little information on how this Scanline render engine finally works. It has taken nearly 8 months of work, and thanks to the TD group for giving me the opportunity to deliver this answer to the quest season. From this we can derive several technical definitions of rendering: allocating vertex information is called vertex rendering; rendering is the process of sharing vertex info throughout the time segment; rendering converts 3D data into 2D data; rendering is the process of converting vertex data into a rasterization.
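The active edge table described above is a classic rasterization structure. Here is a minimal, self-contained sketch of the textbook algorithm (my own toy version, not code from 3ds Max or any actual renderer): the frame is swept top to bottom, edges become "active" on the scanline where they start, retire on the scanline where they end, and pixels are filled between pairs of edge crossings.

```python
# Minimal scanline polygon fill using an active edge table (AET).
# Illustrative sketch of the classic algorithm, not a production renderer.

def scanline_fill(polygon, width, height):
    """Rasterize a polygon given as [(x, y), ...] into a grid of 0/1 pixels."""
    grid = [[0] * width for _ in range(height)]
    # Edge table: one record per non-horizontal edge:
    # [y_start, y_end, current_x, inverse_slope].
    edges = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        if y0 == y1:
            continue  # horizontal edges never cross a scanline
        if y0 > y1:
            x0, y0, x1, y1 = x1, y1, x0, y0  # orient edges top-down
        edges.append([y0, y1, float(x0), (x1 - x0) / (y1 - y0)])

    active = []  # the active edge table for the current scanline
    for y in range(height):
        # Activate edges that start at this scanline...
        for e in edges:
            if e[0] == y:
                active.append(e)
        # ...and retire edges that end here (half-open span [y_start, y_end)).
        active = [e for e in active if e[1] > y]
        # Sort crossings left to right and fill between pairs (even-odd rule).
        xs = sorted(e[2] for e in active)
        for left, right in zip(xs[::2], xs[1::2]):
            for x in range(int(round(left)), int(round(right))):
                if 0 <= x < width:
                    grid[y][x] = 1
        # Step each active edge's x to where it crosses the next scanline.
        for e in active:
            e[2] += e[3]
    return grid

if __name__ == "__main__":
    grid = scanline_fill([(2, 2), (8, 2), (8, 6), (2, 6)], 10, 8)
    for row in grid:
        print("".join("#" if p else "." for p in row))
```

The incremental step `e[2] += e[3]` is the whole point of the AET: instead of intersecting every edge with every row from scratch, each active edge just slides its crossing point by its inverse slope as the line of scan moves down one row.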

Difference between FG and GI - a small overview
To understand the difference between Final Gather and Global Illumination, jumping straight into it makes no sense for me. The two terms, FG (Final Gather) and GI (Global Illumination), both come from LIGHT; the mainframe is the LIGHT - FG and GI don't exist without LIGHT. So in order to understand FG and GI, we should first know: what is LIGHT?

"What is LIGHT?" - a very common question asked of all the lighting artists, especially those in training. Now, LIGHT has various definitions out there. People who stress over definitions do not grow - they just rely on them - but usually it is better to get started with a definition. So, what is light? Light is energy in the form of "electromagnetic radiation". Sounds creepy? Let's look at it, shall we? "Electromagnetic radiation" stands for a combination of electric and magnetic fields - let's go a little deeper into our science class. An electromagnetic wave consists of an oscillating electric field and an oscillating magnetic field travelling together, and the energy it carries comes in discrete packets which we call "photons" (I'll talk about photons later). When this energy strikes the objects that come into contact with it, part of it is absorbed at the surface of those objects and part is reflected or re-emitted - and as a result, we see the objects.

Every light source has its core fundamentals for radiating rays. Rays are nothing but beams travelling only in straight lines unless and until they are diverted by some obstacle. So now, coming to our main concept of FG and GI: to know them well, we should know their individual properties, and from those differences we can tell them apart.
GI (Global Illumination): This is a further development of radiosity, where light radiates not only from the light source but also from the surfaces of objects. Now the question arises: what is radiosity? Radiosity is the radiant energy leaving a surface per unit time per unit area, and the radiosity method is a primary section of the GI algorithms coded inside 3D CG. Radiosity is used to solve the rendering equation, but not completely; it handles diffuse interreflection, not glossy or specular transport. Note that a radiosity solution is view-independent: once computed, it stays valid even when the camera angle changes.

As I have mentioned earlier, light shoots out rays (beams). Let me give you guys a simple example. Say I have a scene containing one plane acting as a ground or floor, some object, let's say a metal ball, and a light. As I said, light radiates bunches of beams in all directions. When these rays come in contact with, or say hit, the metal ball, they get distributed in various directions. Remember what I said earlier: light rays travel in one direction, and their direction only changes when some external element acts on them. GI is mainly used for interior lighting simulation; to form a real-world scenario, GI is used. Now, light rays cannot be seen directly, and they bounce off whenever they are obstructed by any element in their way, so the light is distributed in the form of multiple bounces. These rays are "optical rays" carrying some amount of energy. Keeping the same example of the metal ball and the floor with one single light source, we'll now add a box around them as a small interior, while the light stays outside the box.
Now, when the travelling rays enter our box, they are acted on by the elements present. As the rays fall on each element they check out its "physical properties": diffuse, specular, bump, displacement, SSS, reflection, refraction, etc., to mention a few. After checking these physical properties the rays are diverted, or better said, bounced off. In short, how the light bounces depends entirely on the physical properties of the elements present in our scene. In our scene we have a metal ball, and this ball has metallic properties: a high level of reflection, some diffusion, ambience, and a little bump. The light rays calculate these properties and bounce off accordingly, and what we get is the variety of angles formed as the rays leave the elements.

When the light first enters our scene and its rays collide with our elements and bounce off, that bounce is called the "primary bounce". This primary bounce carries a huge amount of energy. When these energies collide at the first level in our scene, most of that energy is stored over the surfaces they hit. The remaining energy bounces off the surfaces of the elements in other directions until it collides with secondary surfaces elsewhere, and those collisions are known as the secondary bounces. In this way, through multiple bounces of the light rays (light beams), all the energy is gradually consumed by the elements present in the scene. The major important thing is that most of the energy is stored during the primary bounce: light is a continuous flow of energy, and the rays emitting from the main source carry the largest amount.
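The primary-versus-secondary bounce story above can be sketched numerically. This is only an illustration, assuming every surface is a simple diffuse reflector that absorbs a fixed fraction (1 - albedo) of whatever energy arrives; the numbers are not from any real renderer:

```python
# How much energy gets deposited at each bounce depth, under the
# simplifying assumption of one fixed albedo for every surface.
def energy_per_bounce(albedo: float, bounces: int, start_energy: float = 1.0):
    """Return a list of the energy absorbed at each bounce depth."""
    absorbed = []
    energy = start_energy
    for _ in range(bounces):
        absorbed.append(energy * (1.0 - albedo))  # stored at this bounce
        energy *= albedo                          # the rest travels onward
    return absorbed

# With albedo 0.5 the primary bounce stores 0.5 of the energy,
# the secondary 0.25, and so on: most energy lands on the first hit.
deposits = energy_per_bounce(albedo=0.5, bounces=5)
```

Each successive bounce deposits strictly less, which matches the claim that the primary bounce stores most of the energy.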
Now it is time for "photons". Photons are the elementary particles of electromagnetic radiation; they are also called packets of energy. When these light packets, or photons, travel along rays, the energy they contain is called photonic energy. When the photons hit an object's surface, part of this photonic energy is deposited there, and that is how we see the elements. In a CG scene, though, when this photonic energy comes in contact with the elements, it is not stored just anywhere; it requires some sort of storage component, and that component is the photon map. The photon map is the container for storing photons, or better said, the energy received from the light source. Internally it is typically a set of photon records (position, incoming direction, power) organised for fast lookup. The denser this map is, the greater the amount of energy stored in it. This kind of light bouncing, or better put, energy bouncing and absorption, is what helps our scene reach the world of realism.
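The photon-map idea can be sketched as a plain container of records with a "gather the energy near a shading point" query. Production photon maps use a kd-tree for this lookup; the brute-force class below is only a sketch and the names are mine:

```python
# Minimal photon-map sketch: each record stores a position and a power.
# Real photon maps use a kd-tree; this brute-force list just shows the idea
# of 'store energy, then gather the photons nearest a shading point'.
class PhotonMap:
    def __init__(self):
        self.photons = []  # list of ((x, y, z), power)

    def store(self, position, power):
        self.photons.append((position, power))

    def gather(self, point, radius):
        """Sum the power of all photons within `radius` of `point`."""
        r2 = radius * radius
        total = 0.0
        for pos, power in self.photons:
            d2 = sum((p - q) ** 2 for p, q in zip(pos, point))
            if d2 <= r2:
                total += power
        return total

pmap = PhotonMap()
pmap.store((0.0, 0.0, 0.0), 0.5)    # primary-bounce deposit
pmap.store((0.1, 0.0, 0.0), 0.25)   # nearby secondary deposit
pmap.store((5.0, 0.0, 0.0), 0.25)   # far away, outside the gather radius
near_energy = pmap.gather((0.0, 0.0, 0.0), radius=1.0)
```

The denser the stored photons around a point, the more energy a gather returns, which is the "denser map, more energy" statement above in code form.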

Final Gather (FG): FG is another method of illuminating our scene from the same light sources, and it is especially used for rendering exterior scenes. Final Gather is a technique for evaluating GI by sampling: at each shading point it shoots a set of rays over the hemisphere and gathers the light arriving from the surrounding scene. Even in FG there is a process of multiple light bouncing, where energy is stored and re-radiated.
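The "evaluating GI by sampling" part is a Monte Carlo estimate, and it can be sketched in a few lines. This toy version assumes the entire sky returns a constant radiance of 1.0 (my simplification, not a real scene), in which case the true gathered irradiance is exactly pi, so the estimate should land nearby:

```python
import math
import random

# Final-gather sketch: estimate the light arriving at a point by shooting
# random rays over the hemisphere and averaging what they 'see'.
def final_gather(num_rays: int, sky_radiance: float = 1.0) -> float:
    random.seed(7)  # deterministic for the example
    total = 0.0
    for _ in range(num_rays):
        cos_theta = random.random()  # uniform sampling over the hemisphere
        # Each sample contributes radiance * cos(theta), divided by the
        # uniform-hemisphere pdf of 1 / (2 * pi).
        total += sky_radiance * cos_theta * 2.0 * math.pi
    return total / num_rays

estimate = final_gather(50000)  # should be close to math.pi
```

More rays mean less noise but longer pre-computation, which is exactly the quality/time trade-off FG settings expose in real renderers.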

The reason for the interior/exterior split, using GI for interiors and FG for exteriors, is not a hard rule; the special purpose of these functions changes nothing about the energy being radiated. Light rays hitting a surface on the first hop are direct illumination, since no other obstacle comes in between (volumetric particles are calculated differently; more on that portion later). Now let us see some basic differences:

1. FG increases the quality of GI solutions.
2. Both FG and GI use path tracing (a Monte Carlo method).
3. GI is always computed from photon density, i.e. energy.
4. GI uses more indirect illumination (the reverse may happen in some cases).
5. FG uses more direct illumination (the reverse may happen in some cases).
6. FG can be optimized via GI, but GI cannot be optimized by FG.
7. Playing with the sampling radius and merging photons allows GI to reduce artifacts.
8. FG has direct options for dealing with artifacts.
9. FG and GI both use a defined volume provided with direct and indirect illumination.
10. FG automatically uses skylight properties, whereas GI doesn't.
11. In Scanline, FG corresponds to the Light Tracer and GI corresponds to Radiosity.
12. FG and the Light Tracer give much the same result in different render engines.
13. GI and Radiosity give much the same result in different render engines.
14. In Mental Ray, FG and GI can both be used simultaneously, but the Light Tracer and Radiosity in Scanline cannot.
15. FG and GI don't provide tools for post-light effects such as shading some corners or highlighting a particular area; Radiosity provides that, and the Light Tracer does not.
16. GI is computed during rendering itself, while FG is pre-computed and its progress can be watched as a percentage in the processing statistics.
17. FG is not dependent upon GI, whereas GI is dependent upon FG.
18. FG points change when going from a static frame to animation.
19. A GI calculation needs more than two entities for photon bouncing surface to surface, whereas FG doesn't.
20. Artifacts in animations computed with FG can be diagnosed and removed.
21. BRDF is very well supported by both FG and GI.

Scanline calculates neither FG nor GI; what it calculates is just the basic information fed to the render engine in a basic scenario. In Mental Ray the primary bounces are always computed first, but in V-Ray, if Light Cache is set for indirect illumination, the secondary bounces are computed first. The filters used in MR (Mental Ray) are the same filters used in V-Ray and in Scanline; as a matter of fact their function doesn't change, but the render time does. Filter quality may differ from engine to engine depending on whether your light bounces are calculated at the pre or post level. Maxwell Render is a physical render engine, just as Mental Ray is, but V-Ray is not a physical render engine. They are called physical render engines because they compute and render every light property fed to them as per real-world behaviour. GI in Maxwell Render takes a pretty good amount of time, while FG computes in less time than GI; Maxwell calculates its GI and FG in a crisp manner, so in short, Maxwell renders the scene at a crisp level. MR and V-Ray both go for bucket rendering.

Now let's see something about volumetrics in light bouncing. Just as light bounces in and out and radiates huge amounts of energy when computing interior and exterior scenes, the same thing happens for volumetrics. But that light bouncing happens so fast and so vigorously that the system can hardly even notice it, and when it does notice, it crashes. So for rendering such volumetric light bouncing we go for the Volume Rendering option.
One very important point about light bouncing for FG and GI: both of them calculate direct and indirect illumination. FG can also be used for interiors and GI can also be used for exteriors, but for exterior scenes the timing of the multiple beam bounces differs from interiors, since the elements present in an interior are far closer together than in an exterior scene. Forgive me if I went wrong somewhere; I tried to explain a little bit about lighting through the FG and GI systems.

Circular Circle
This is the most demanding quest so far, but anyway, more will appear from me in the near future. If you check the conditions I set out, I said I need answers from experience, not copy-paste answers, so here it goes. Once again you'll notice that my answer reads like a printed book, so from here you'll need more coffee for late-night study. Ha ha. So I think we should get started. This quest's answer will be far more technical, but I'll try to keep it as simple as I can, and if I have missed out something, do let me know so I can edit it and get back to you. I also apologize in advance for posting this quest's answer a bit late again, thanks to my hectic schedule now that I'm planning to shift to Goa-Vasco for some work.

So let's get started, shall we then? Very surprisingly, some of us work in 3D, some in 2D and some in compositing applications. But what we miss are the basic fundamentals that we were taught in school; those who know these fundamentals often do not take our field as a career, at least from what I have seen with my own eyes. When you are learning the basics of 3D you are taught the GUI (Graphical User Interface) of that particular application, but what they do not teach or explain is how things function behind all these creations. RIGHT. In my last quest I mentioned that definitions are fine for starters, but the practical approach comes much later, and when it comes, people are already in a professional state. Getting back to the quest and the GUI, let's see what lies behind all these systems! Whenever you model, you start from a basic object, and to develop it into your result you need some EXTRA functions that you can use with ease, just like Editable Mesh, Poly, Patch and NURBS. Each of these carries additional functions that let the artist reach the advanced features of the modeling category. What we start with are vertices, edges, faces, polygons and elements. RIGHT AGAIN. YEAH.
So, to understand these subjects, let's look one by one at the topics listed below:

1: Vertices, edges, faces, polygons. 2: Fact v/s practical approach. 3: Conditional pipeline. 4: Excellent network branch breakdown. 5: Statistics. 6: Analyzing graphical structure. 7: Cartesian network. 8: Development stage.

Getting started with our first section.

A.1: VERTEX: A vertex has many definitions from the mathematics and physics points of view. In our CG world a vertex is known as a single floating point (I call it a floating point since that is where I am starting from). But how is a vertex formed? QUESTION?? YEAH. To form a single vertex you require a couple of lines passing through each other; at the section where these two lines intersect, a point is automatically formed, and we call this point a vertex. Again, HOW? A vertex is a bit like a vector, in that it records the directions from which those lines have come and gone.

Now let us see it from the modeling side. A vertex is the smallest element in the modeling category, just like the atom in the physics world. A vertex is a SOLO property within the outer world: all the data is stored in the vertex, and from there you can easily deform your element with a handy tool set. But a vertex is not alone; it is CYCLED. Now, don't get confused by "solo property" versus "not alone". As I said, a vertex is formed by a minimum of two lines, so a vertex is never alone; it is held by something outside itself, and those holding components are very much required. Even the components that hold a vertex are themselves held by other components, so you see they are all interconnected. These vertices are vital substances where data is stored.
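The "vertex is where two lines intersect" idea above can be written out directly. A minimal sketch, with each 2D line given as a point plus a direction (the function name is mine):

```python
# A vertex as the intersection of two lines: solve p1 + t*d1 == p2 + s*d2
# for t using the 2D cross product of the directions.
def intersect_lines(p1, d1, p2, d2):
    """Return the intersection point of two 2D lines, or None if parallel."""
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines never meet in a single vertex
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# The lines y = x and y = -x + 2 cross at the vertex (1, 1).
vertex = intersect_lines((0, 0), (1, 1), (0, 2), (1, -1))
```

Parallel lines return None, which is the geometric fact that not every pair of lines produces a vertex.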

A.2: EDGE: An edge is a connector of at least two vertices, two end holders. An edge is a line or segment, and a segment contains an infinite number of points. We can argue that these infinite points could in principle be enumerated step by step. I do not know how many scientists have tried this before, but what I know is that there are no tools available to calculate and store all these points at once: your calculator and your computer's memory management will actually run out of memory if you tell the system to compute infinitely many points. What we learned in the past still holds: a line is made up of infinite points, and a line segment is connected by two end points.

Now let me get to the networks. We have a line from A to B, and A-B contains infinite dots. Cutting a small section out of A to B (please check my album Quest3 for screenshots to understand this better), we are selecting a few dots from the infinite dots, and what we get is another line containing infinite dots. Cutting two points out of that cut segment, those two dots again bound infinitely many points. Now, what the HELL am I talking about? It actually makes sense: between the two most recently cut dots there are again infinitely many dots, which we see as a line. But in a sense the line itself does not exist, and we can argue it. HOW? OK. We have two dots connected at the ends, bounding an infinite number of dots. RIGHT. Those dots in turn bound further infinite dots, but the human eye is not able to see the tiny dots individually; what we see is only the LINE. If somebody in the future builds a lens that can reach inside a line, then it would be possible to see all the infinite dots, each pair of which bounds another infinity of dots, and this continues as you cut your main segment into multiple segments, going in and in and in, with no end until you find the last single dot. This is why we say "a line consists of infinite points", and what we have shown here is how that infinity unfolds step by step through subdivision.
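The "your computer will run out" side of this argument can be felt concretely with floating-point numbers: if we keep halving a segment, the machine runs out of distinct representable midpoints after a few dozen steps. A sketch (the interval [1, 2] is an arbitrary choice of mine):

```python
# Keep taking the midpoint of [a, b] and zooming into the left half.
# Mathematically this never ends; in 64-bit floats it stops after roughly
# 52 steps, when the midpoint is no longer a new, distinct number.
def bisection_steps(a: float, b: float) -> int:
    steps = 0
    while True:
        mid = (a + b) / 2.0
        if mid == a or mid == b:   # no representable point left between them
            return steps
        b = mid                    # zoom into the left half
        steps += 1

steps = bisection_steps(1.0, 2.0)
```

So the infinite subdivision is a fine mathematical picture, but any real tool exhausts its precision almost immediately, which is the "memory instability" point in machine terms.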

Pros: the subdivision into dots can be carried on step by step. Cons: no tools or lenses are available that can reach inside a single line. One of the main things I forgot to mention above is that, in this view, a line does not exist on its own: a single line connected by two dots contains infinite dots inside it, which means our naked eye does not have the capacity to look deep inside the line.

Take a quick review:
1] A line is connected by end dots.
2] A line consists of infinite dots.
3] These infinite dots are connected by further infinite dots, which are connected by further infinite dots, and so on.
4] The process continues until you reach the end.
5] We could in principle enumerate those dots step by step (no tools available, and memory instability).
6] So there is no line as such, only dots connected with dots, which we see as a line.

Now, if you get this, we'll continue to check out the network. A line is connected by two end DOTS; these dots hold infinite dots, connected with further infinite dots, and this process goes on as long as you keep breaking it down between any two dots. HOW?? I take a line from A to B, which contains infinite dots. From A-B I select two dots and zoom in to visualize what lies between them, and what I see is that these two selected dots bound another infinite number of dots, which we see as a line. Again I select another two dots from this line, and again what I see is a line containing an infinite number of dots. So dividing between two dots always reveals infinite dots, and if this process goes on we keep getting infinite dots.
This kind of network is called "Multiple Network Branching of Dots" [please see the album Quest3 to understand it]. Again, we have argued that a given single line, supported by its end dots, hides a network of branches of dots connected by dots connected by dots.

Basic conclusion: we have broken down the usual statement that one line is connected by two points and contains infinite points which we simply see as a line. The line is nothing but dots connected by dots, giving us a network; in that sense there is no line, just dots, in a mutual bonding of points.

A.3: Faces and polygons: From the name itself, a face is the area that is being viewed. When you say face, you mean a surface bounded on all sides; you won't normally find an open face, and if you do, it degenerates to a dot (I'll explain the dot as a face a bit later). A face has a minimum of 3 sides; these 3 sides are connected by 3 edges, and these 3 edges are connected by 3 points. When these 3 points are connected to each other you get joined lines, and when those lines are joined you get a closed surface. That closed surface is eventually called a FACE, surrounded by 3 lines. From dot-to-dot connections we get an edge, and from edge-to-edge connections we get a closed region known as a face; an open shape not closed by 3 sides is just individual, floating lines. Whether a face can be seen from both sides depends on the beams traced onto it; ray tracing can change the view of a face depending on the space.

Now the question of POLYGONS. A polygon is a kind of figure containing more than one face at a time; "poly" refers to many and "-gon" to corners (angles). A polygon is bounded by a closed circuit in a cyclic manner; when multiple bodies or faces are connected with each other, we get polygons. Any polygon, regular or irregular in shape, has as many corners as it has sides, and if you look at these polygons more closely you get to see angles.
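The dots-to-edges-to-face chain above is exactly how meshes store geometry: a face is an ordered loop of vertex indices, and its edges are the consecutive pairs in that loop. A minimal sketch (the layout is a common convention, not any specific package's format):

```python
# A face as an ordered loop of vertex indices; its bounding edges are the
# consecutive index pairs, wrapping back to the start so the loop closes.
def face_edges(face):
    """Return the edges (vertex-index pairs) bounding a face loop."""
    n = len(face)
    return [(face[i], face[(i + 1) % n]) for i in range(n)]

vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # three dots
triangle = [0, 1, 2]                             # a closed 3-sided face
edges = face_edges(triangle)                     # [(0, 1), (1, 2), (2, 0)]
```

Because the last edge wraps back to the first vertex, the loop is always closed, which is the "you won't find an open face" rule in data-structure form.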
We get only two kinds of angle, interior and exterior; at each corner a polygon has one interior angle and one exterior angle.

Let us see how these substances help us in our CG world, shall we then?

1] Modeling: When we model any entity we work with vertices, edges, polys and faces. Whatever we do, any external amendment is stored at the vertices and not the polygons, since the vertex is the smallest element in modeling. All this working data is stored inside the vertex, but it needs a container, a vertex container. Whenever you model anything, all the data is stored there, and when new data is applied the old data is erased completely; what we see here is that the vertex holds only one set of data at a time. But when you apply a subdivision system, only the normals are recomputed for the scene.

2] Shading: A vertex is a solo performer that cannot act without an audience; it requires a stage where it will hold data. A vertex holds shading as well as texture-wrapping data: how the look shall be is stored in it, along with its physical properties.

3] Lighting: A crucial part when baking textures and all related data, and for multiple light bouncing.

4] FX: Simulations and FX baking become a vital part of vertex data, especially when you do lots of volumetric rendering.

5] Rendering: For rendering, please refer to "How the Scanline Render Engine Works".

Now the crucial section comes into play. I did not mention earlier that a single vertex can itself be pictured as made up of polygons. Now this becomes really confusing. Consider a single dot from a single line, and also suppose the era has arrived where scientists have designed a lens that can pass through a line, meaning we can actually see what lies inside one particular line. On that basis we pick up a single dot from a line and explore it. Let us consider that the dot we take is a circle drawn on paper.
A circle is made up of a defined radius. If the radius A to B = 1 unit, then the diameter will be 2 units; A = B = 1 unit. Now, as we said earlier that a line consists of multiple dots, each and every dot should have the same unit; if they don't have the same unit, we won't see a smooth line (let alone a straight one). For example, take a paper and draw a circle of 5 units, or any unit. Make two parallel lines and place the circle between them, then copy this circle horizontally at the same unit. What you will notice is that circles of the same unit don't cross those two lines. Take the same setup and the same circle, but give the copies different radius values, and you'll see these circles cross the two lines, which cannot be said to be a smooth line.

Let's revise a little. A line is made up of dots connected by dots; a face is bounded by 3 lines, but a line is dots, so eventually multiple faces become polygons comprising a huge number of dots. If a dot has a radius, then we can break the dot into a pie-like structure, bifurcating it into multiple pies the way we cut a cake from the edge to the middle. RIGHT. We get multiple pies. My point is, remember, the point is the solo performer, but some secondary source is required to hold the point; that's my point. As we presumed that a line is connected with a line, a dot is also connected with dots. So when we zoom into a single dot we see it as a circle, since a dot will always look like a circle, not a square or a pentagon or the like. If a dot is a circle, then it has lines connected with it, and if it has lines, then it has to be dots connected within itself. What we see here is fairly simple: if a line is made up of dots, each dot also contains sub-dots within it, and these sub-dots are connected by lines that are themselves dots, and this procedure goes on again and again as far as you push it.
What I mean to say is that if a dot is connected by lines, then it has to go on to faces. Remember, we cut the dot into a pie-like structure, and a pie is a closed surface, just like a face is a closed surface. YEAH. [Please check the Quest album for better understanding.]

So what we have derived here is that a single dot consists of sub-dots connected by lines, joined together by lines to form closed areas, and these multiple closed areas form polygons. I am not going extremely deep into this as some self-acclaimed expert; you can try it yourself too.

Let's take a quick revision:
A line is connected by dots.
A line consists of infinite points.
All the infinite dots could be counted and read if the tools were available.
We broke a single line down into distributions of dots over multiple sections.
Multiple dots are connected by lines, which are eventually nothing but dots again interconnected with them.
Taking a single dot, we get to know its dimensions.
This single dot can further be seen to form a closed surface by means of dots in a cyclic process throughout.
Finally, just like the multiple network branching of lines, we also get the multiple network branching of dots.

OK, so from the overall structure and the technicalities of mathematics and physics we got a closed surface. We can also use trigonometric functions for forming various angles. So these dots contain values, values given by trig functions, and multiple angles give us dot-faced polygons.
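The "values given by trig functions" remark can be made concrete: from a polygon's vertex coordinates, the interior angle at each corner comes straight from the dot product and arccos, and for a simple polygon the angles sum to (n - 2) * 180 degrees. A sketch, assuming a convex polygon so arccos returns the right angle:

```python
import math

# Interior angles of a convex polygon from its vertex coordinates.
def interior_angles(points):
    n = len(points)
    angles = []
    for i in range(n):
        ax, ay = points[i - 1]          # previous vertex
        bx, by = points[i]              # this corner
        cx, cy = points[(i + 1) % n]    # next vertex
        v1 = (ax - bx, ay - by)         # edge back to the previous vertex
        v2 = (cx - bx, cy - by)         # edge on to the next vertex
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angles.append(math.degrees(math.acos(dot / norm)))
    return angles

# A right triangle: angles 90, 45, 45, summing to 180.
tri = interior_angles([(0, 0), (1, 0), (0, 1)])
```

A triangle sums to 180 and a square to 360, matching the (n - 2) * 180 rule, so the corners really are "dots holding trig values".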

Your submission at Articles for creation: User:QuantumFrameworks/sandbox (April 12)
 Your recent article submission to Articles for Creation has been reviewed! Unfortunately, it has not been accepted at this time. Please read the comments left by the reviewer on your submission. You are encouraged to edit the submission to address the issues raised and resubmit when they have been resolved. ''' Thank you for your contributions to Wikipedia! '''
 * If you would like to continue working on the submission, you can find it at Wikipedia&.
 * To edit the submission, click on the "Edit" tab at the top of the window.
 * If you need any assistance, you can ask for help at the [//en.wikipedia.org/w/index.php?title=Wikipedia:WikiProject_Articles_for_creation/Help_desk&action=edit&section=new Articles for creation help desk], or on the [//en.wikipedia.org/w/index.php?title=User_talk:Aggie80&action=edit&section=new reviewer's talk page].
 * Please remember to link to the submission!

The Ukulele Dude - Aggie80 (talk) 16:08, 12 April 2014 (UTC)
 * You can also get real-time chat help from experienced editors.

Your draft article, User:QuantumFrameworks/sandbox


Hello QuantumFrameworks. It has been over six months since you last edited your WP:AFC draft article submission, entitled "sandbox".

The page will shortly be deleted. If you plan on editing the page to address the issues raised when it was declined and resubmit it, simply and remove the  or  code. Please note that Articles for Creation is not for indefinite hosting of material deemed unsuitable for the encyclopedia mainspace.

If your submission has already been deleted by the time you get there, and you want to retrieve it, copy this code:, paste it in the edit box at this link , click "Save page", and an administrator will in most cases undelete the submission.

Thanks for your submission to Wikipedia, and happy editing. JMHamo (talk) 15:26, 24 October 2014 (UTC)