
INTRODUCTION TO VISUAL LANGUAGE

Visual Perception: How We See and Perceive

People absorb almost all of the information about their environment through the eye. Light is not only a precondition and medium for seeing; through its intensity, distribution and properties it also creates specific conditions that influence our perception.

Overview of the topic of visual perception:
1. What is meant by visual perception?
2. How does visual perception work?
3. What is visual perceptual psychology?

1. What is meant by visual perception?
Visual perception is a psychological process by which people inform themselves about the objects in their environment through the medium of light. It is fundamentally influenced by three factors: light, the object, and the perceiving subject.

2. How does visual perception work?
In the eye, an inverted image is projected onto the retina via a deformable lens. From the retina the image is transported through the optic nerve to the brain, which turns the recorded image the right way up again and processes the information. This principle is also imitated by a camera: the aperture assumes the function of the deformable iris, and image sensors are modelled on the light-sensitive retina. Here as well, the captured image is upside down and is shown on the display the correct way up again through computation.

In the human eye there are differences between the perceived object and the image on the retina: the image is spatially distorted by projection onto the curved surface of the retina (spherical aberration), and the spectral colors are reproduced incorrectly (chromatic aberration). The latter occurs because light of different wavelengths is refracted unequally, causing rings of color around objects. The brain removes these aberrations during further processing of the image.

3. What is visual perceptual psychology?
Perceptual psychology is a branch of science concerned with the various aspects of visual perception, especially neural reception and the processing of sensory stimuli. To comprehend optical perception, the process of building up visual impressions is of particular importance. Apparent mistakes enable an analysis of the modes of action and objectives of perception. On the one hand we have constancy phenomena: constant objects generate images of different shape, size and brightness distribution on the retina due to changes in e.g. lighting, distance or perspective.
The mechanisms of constancy perception compensate for these deviations in the retinal images. Before objects are assigned properties, they must first be recognized, i.e. distinguished from their surroundings.

What makes up our visual perception?

Visual perception is the ability to process and interpret visual information. It can be broken down into subcategories:
1. Visual discrimination – helps us to see the similarities and differences between objects.
2. Visual memory – helps us to recall the characteristics of an object.
3. Visual figure-ground – helps us to find a subject in a busy background.
4. Visual spatial relations – helps us to recognize the positions of objects in relation to one another.
5. Visual closure – helps us to visualize an object's form when given only parts of the picture.
6. Visual sequential memory – helps us to recall the order of a sequence of objects.
7. Visual form constancy – helps us to manipulate, rotate, or flip objects in our minds and know whether they are the same or different.

(Figure: building blocks of visual perception)

Meaning of Visual Perception
Beyond just sensing something in our environment, different parts of our bodies work together with our senses (like seeing a lump in the road) to interpret what we are sensing: the visual, auditory, tactile, olfactory, and gustatory information we receive from the environment. These are called perceptual skills, and making sense of visual information specifically is called visual perception.

Definition
Visual perception is defined as the ability of the eyes and the brain to interpret visual data received from the environment. Perception is distinct from acuity, which is the ability to see clearly.

Difference between Visual Perception and Visual Acuity
Perception is about taking information that has already been received by the senses and making sense of it. The specific cells in the eye receiving the light signals work together with nerves travelling up to the brain to make sense of the visual data.

Example
• Two people can see the same image but have different interpretations of it. Looking at the shapes in an optical illusion, one person may say the shapes are moving in a clockwise direction, while the other may perceive them as moving counterclockwise.
• Visual acuity (VA) commonly refers to the clarity of vision, but technically rates a person's ability to recognize small details with precision. Visual acuity depends on optical and neural factors. Optical factors of the eye influence the sharpness of the image on its retina. Neural factors include the health and function of the retina, of the neural pathways to the brain, and of the interpretative faculty of the brain.
• Either way, a person's perception of which direction the shapes are moving in is completely distinct from their vision's clarity.
Even if both people looking at the image have 20/20 visual acuity, their perception of the image can still be different.

Visual Perception Skills
The skills your eyes need to use depend on the conditions within your environment. Are you sitting in a dark room with no window? Are you outside at a carnival during the day? The visual skills you use will be different, and your perception of what you're seeing will also be different. There are three types of visual skills that you should know:

Photopic Vision
You use your photopic vision during the daytime or in an environment with adequate lighting. Outside during the day or inside with the lights on, your eyes and brain sense and interpret the colors around you through specific cells in your eyes called cone cells.

Color Vision
Cone cells in human and animal eyes sense color by interpreting different wavelengths of light. The whole process of color perception is complicated, from the reception of light, to the acceptance of the information by the photoreceptors, and finally to the activation of specific neurons. Two theories that explain how we receive and interpret colors are the trichromatic theory and the opponent-process theory.

Trichromatic Theory
The trichromatic color theory, also known as the three-component theory, assumes that there are three primary colors: red, green, and blue (RGB). Thomas Young proposed in 1802 that our eyes have three sensors that detect different wavelengths of light. Then, 50 years later, Hermann von Helmholtz proposed that the cones of our eyes respond to varying wavelengths of light. The activation and combination of these three types of cone cells produce a wide range of hues.

Opponent-Process Theory
According to this theory, the eye's sensors function in pairs: red and green, yellow and blue, and black and white. The activation of one color sensor disables its partner. This theory explains how color afterimages and color blindness happen. When you stare at something yellow, the yellow sensor is activated, and when you move your focus to a blank page, a blue afterimage appears. A missing receptor pair, such as red and green, makes it difficult to see red and green hues in the case of color blindness.
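The idea that hue is encoded by the combined activation of three cone types can be illustrated with a toy model. The sketch below approximates each cone type's sensitivity as a Gaussian curve; the peak wavelengths are rough textbook values and the shared bandwidth is an illustrative assumption, not measured data.

```python
import math

# Toy model of trichromatic sensing: each cone type is modelled as a
# Gaussian sensitivity curve. Peak wavelengths (nm) are approximate
# textbook values; the bandwidth is an assumed, illustrative constant.
CONE_PEAKS = {"S (blue)": 420, "M (green)": 534, "L (red)": 564}
BANDWIDTH = 50.0  # assumed standard deviation, in nm

def cone_response(wavelength_nm: float, peak_nm: float) -> float:
    """Relative response of one cone type to monochromatic light."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * BANDWIDTH ** 2))

def responses(wavelength_nm: float) -> dict:
    """Responses of all three cone types to one wavelength."""
    return {name: round(cone_response(wavelength_nm, peak), 3)
            for name, peak in CONE_PEAKS.items()}

# Yellow light (~580 nm) strongly excites L and M cones but barely the
# S cones, so a single wavelength is perceived as a 'mixed' hue.
print(responses(580))
```

The ratio of the three responses, rather than any single value, is what the trichromatic theory takes to encode hue.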

Scotopic Vision
• You use your scotopic vision in dark or low-light environments. In these conditions, rod cells in the eye are activated, allowing for better interpretation of visual data even in low-light settings.
• Cone cells in your eye perceive light in well-lit settings. They are also responsible for perceiving color. The cones of our eyes are grouped around the retina's center; a depression in the retina's center called the fovea contains the highest concentration of cones.
• Rod cells in your eye perceive light in low-lit settings, but they do not detect colors. These cells are distributed all over the retina and are solely responsible for detecting black and white. The light-sensitive molecule rhodopsin, found in rods, is crucial for allowing vision in low-light conditions.

Mesopic Vision
Mesopic vision, or twilight visual perception, is the ability to detect and interpret visual data in semi-dark settings. Street lighting at night, outdoor night settings, and other environments with low-light lamps activate this visual skill. Experts believe that mesopic vision combines the use of rod cells and cone cells, i.e. scotopic vision and color perception. Depending on your setting, your eyes use different skills to receive information from your environment. How well these skills work varies from one person to another. One person may have excellent photopic vision but struggle with color perception. Those who are color blind have a specific type of visual perception disorder involving the eye's cone cells.

Color Blindness
Color blindness is the inability to see and distinguish colors the way most people do. When one or more of the eye's color sensors (cones) are missing or faulty, it can be hard to detect colors. In rare instances of severe color blindness, only shades of gray are perceived; in mild color blindness, colors can still be seen easily in good lighting conditions. Color blindness usually affects both eyes, and it remains stable over a lifetime.

Visual Perception Disorders

Vision dominates all our senses, and we rely on it more than the others. Visual perception disorders involve difficulties with the interpretation and processing of visual information. This is not the same as problems with vision: visual processing problems alter how the brain makes sense of information received through the eyes.

Noticing differences. Visual processing problems pose a roadblock in comparing shapes, numbers and symbols, colors, and pictures. One example is in activities involving distinguishing between colors.

Sequencing images. This ability is vital in arranging images, numbers, letters or words in order and distinguishing the correct sequence. One example is in answering math problems.

Coordinating movements. The use of sight to coordinate body movements. An example is drawing images or copying information from a chart or graph.

Remembering visual material. Long-term memory involves retrieving visual information received a long time ago, such as the appearance of a building from many years ago. Short-term memory covers recent memories, such as the face of a person you've just met or specific directions received minutes ago.

Injury to certain parts of the cerebral cortex (e.g. the occipital lobe) that play a specialized role in vision causes visual perception disorders such as visual agnosia. Visual agnosia is a disorder involving difficulties recognizing people and objects, even with good memory and intellectual function. Agnosia does not always affect all visual inputs; it may affect some inputs, such as objects, colors, faces, or environmental scenery, while leaving others unaffected.

Importance of Visual Perception
Visual perception is vital for humans and animals to interpret and respond to visual information correctly. Some examples of visual perception in daily life include:
o Operating machinery, such as vehicles
o Writing and reading
o Avoiding danger outdoors
o Completing most motor tasks (walking, running, preparing a meal, washing, etc.)
The ability to interpret visual features in the immediate environment makes humans and animals capable of survival and advancement through different life stages.

How Does Visual Perception Work in Psychology?

There are many interpretations in psychology of how people perceive visual information. Since perceptual experiences can be quite different, many experts debate how much visual interpretation relies solely on information sensed in the environment. According to scientists and psychologists, there are two main types of processing:

Bottom-Up Approach
In psychology, this is defined as perception based on the data received. Visual information passes through the eyes, and all the organs work together to carry signals to the brain for the visual cortex to interpret. The brain pieces all the information together as sensory input comes in.

Example: You're walking down a busy street, and a billboard catches your eye; you detect the billboard ad's colors, shapes, and text. Your brain combines all that information, and you perceive a hamburger from a popular fast-food chain.

Top-Down Approach
This is another form of visual processing where information is taken within its full context. The top-down approach involves interpreting sensory information in light of what is already known (e.g. assumptions).

Example: A person can interpret a blurry picture that seems familiar by picking out familiar shapes. Many visual perception tests and optical illusions are based on this approach. This processing technique perceives information based on previous knowledge, making it vulnerable to optical illusions.

Two Main Theories of Visual Perception
1. Top-down processing by Richard Gregory (1970)
Richard Gregory is a psychologist who theorized that perception is a constant process of hypothesis testing. According to Gregory, the ability to interpret vision relies on previous "schemas", or former experiences. Gregory argues that around 90 percent of the visual information our sensory faculties gather is lost; only a fraction reaches the brain for processing.
Practical evidence supporting the top-down processing theory includes optical illusions, where people have different perceptions even with relatively similar visual skills. The spinning ballerina is one of the best-known visual perception activities: people argue over whether the figure is spinning clockwise or counterclockwise. The differences in perception are a hallmark of hypothesis testing by individuals with various schemas, or former experiences.

2. Bottom-up processing theory by James Gibson (1972)
According to another theorist, James Gibson, sensory processes are deeply tied to visual perceptual skills. Perception is an evolutionary feature of humans that does not need constant hypothesis testing to make sense of visual data. Gibson argued that bottom-up processing is necessary for people's survival, even without prior knowledge of what is being perceived. Essentially, he believed that "what one sees is what one gets", or that sensory information is directly perceived as it is.

Optical Illusion

Color and Depth Perception

Color Vision
Normal-sighted individuals have three different types of cones that mediate color vision. Each of these cone types is maximally sensitive to a slightly different wavelength of light. According to the Young–Helmholtz trichromatic theory of color vision, shown in Figure 1, all colors in the spectrum can be produced by combining red, green and blue. The three types of cones are each receptive to one of these colors.

This figure illustrates the different sensitivities of the three cone types found in a normal-sighted individual.

Depth Perception
Our ability to perceive spatial relationships in three-dimensional (3-D) space is known as depth perception. With depth perception, we can describe things as being in front, behind, above, below, or to the side of other things. Our world is three-dimensional, so it makes sense that our mental representation of the world has three-dimensional properties. We use a variety of cues in a visual scene to establish our sense of depth. Some of these are binocular cues, which means that they rely on the use of both eyes. One example of a binocular depth cue is binocular disparity, the slightly different view of the world that each of our eyes receives.

To experience this slightly different view, try this simple exercise: extend your arm fully, extend one finger, and focus on that finger. Now close your left eye without moving your head, then open your left eye and close your right eye without moving your head. You will notice that your finger seems to shift as you alternate between the two eyes because of the slightly different view each eye has of it. A 3-D movie works on the same principle: the special glasses you wear allow the two slightly different images projected onto the screen to be seen separately by your left and right eyes. As your brain processes these images, you have the illusion that the leaping animal or running person is coming right toward you.

Although we rely on binocular cues to experience depth in our 3-D world, we can also perceive depth in 2-D arrays. Think about all the paintings and photographs you have seen. Generally, you pick up on depth in these images even though the visual stimulus is 2-D. When we do this, we are relying on a number of monocular cues, or cues that require only one eye.
If you think you can't see depth with one eye, note that you don't bump into things when walking with only one eye open; in fact, we have more monocular cues than binocular cues. An example of a monocular cue is what is known as linear perspective. Linear perspective refers to the fact that we perceive depth when we see two parallel lines that seem to converge in an image (Figure 3). Some other monocular depth cues are interposition (the partial overlap of objects), the relative size and closeness of images to the horizon, relative size itself, and the variation between light and shadow.

What is 3D?
Three-dimensional space, or 3D, refers to the physical world around us that we can see, touch, and interact with. It is characterized by three dimensions: length, width and height. We can use these three dimensions to describe the position of objects in space, their size, and their shape. For example, we can describe the size of a table using its length, width and height. Three-dimensional space is also used in computer graphics and animation to create lifelike images and videos. In this context, 3D refers to the creation of three-dimensional objects and environments using computer software. The software can be used to create models of objects, which can then be manipulated to create animations or still images.
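The binocular disparity discussed earlier is also the basis of machine stereo vision, where depth is recovered from the pixel shift of a point between two camera views. The sketch below uses the standard pinhole-stereo relation, depth = focal length x baseline / disparity; the focal length and the 6.5 cm baseline (roughly a human inter-eye distance) are illustrative assumptions.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (metres) of a point seen by two horizontally offset cameras.

    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centres ('eyes'), metres
    disparity_px : horizontal shift of the point between the two images, pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: an assumed 800 px focal length and a 6.5 cm baseline.
near = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=40)
far = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=4)
print(near, far)  # 1.3 13.0  (metres)
```

Note how a tenth of the disparity means ten times the estimated depth: near objects shift a lot between the two views, distant ones barely at all, which is exactly why the finger exercise above works.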

What is 4D?
Four-dimensional space-time, or 4D, includes time as a fourth dimension. This means that it is not just space that is being described, but also the movement of objects through time. In physics, space-time is used to describe the universe, and it is a fundamental concept in the theory of relativity. In 4D space-time, an object's position in space and time is described using four coordinates: three for space (length, width, and height) and one for time.

Differences: 3D and 4D
One of the key differences between 3D and 4D is that 3D is static, while 4D is dynamic. In other words, 3D describes objects and environments as they are, while 4D describes how they change over time. For example, a 3D model of a tree would show it at a single moment in time, while a 4D model would show how the tree grows and changes over time. Another difference is that 3D is more tangible, while 4D is more abstract: 3D objects and environments can be physically interacted with, while 4D space-time is more of a conceptual construct. It is a way of describing the universe, but it is not something that we can touch or see directly. Despite these differences, both 3D and 4D have important applications in a wide range of fields. In addition to computer graphics and animation, 3D is used in architecture, engineering, and manufacturing to create designs and prototypes of buildings and products. 4D, on the other hand, is used in physics and astronomy to study the behaviour of the universe and to make predictions about its future.
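The contrast above, 3D as a static description versus 4D as change over time, can be sketched as two small data types. All names and the growth numbers below are hypothetical illustrations, not a standard representation.

```python
from dataclasses import dataclass

@dataclass
class Point3D:
    """A static position: three spatial coordinates (metres)."""
    x: float
    y: float
    z: float

@dataclass
class Event4D:
    """A space-time event: the same three coordinates plus a time."""
    x: float
    y: float
    z: float
    t: float  # time, in whatever unit the model uses

def position_at(start: Point3D, velocity: tuple, t: float) -> Event4D:
    """A 3D snapshot becomes a 4D trajectory once time is added."""
    vx, vy, vz = velocity
    return Event4D(start.x + vx * t, start.y + vy * t, start.z + vz * t, t)

# The tree example from the text: a treetop at height 5 m, growing
# 0.2 m per year, described 10 years later.
print(position_at(Point3D(0, 0, 5), (0, 0, 0.2), 10))
```

The 3D type answers "where is it?"; the 4D function answers "where is it, and when?", which is the dynamic description the text contrasts with the static one.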

Colour Psychology
Color is a powerful communication tool and can be used to signal action, influence mood, and even trigger physiological reactions. Certain colors have been associated with physiological changes, including increased blood pressure, increased metabolism and eyestrain. Color psychology examines how colors affect the mind and body. It also explores research on the effects of color and the psychological reactions people may experience.

Color can hold an immense amount of power over us. A historical anecdote that illustrates this is the story of 20th-century artist Barnett Newman's painting Who's Afraid of Red, Yellow and Blue III. The 18-foot-wide painting consists of a thin blue strip on the left-hand side and a thin yellow strip on the right, while the rest of the canvas is painted a blaring, bright shade of red.

In the 1980s, the piece was displayed at Amsterdam's Stedelijk art museum, where it caused much dispute and vehement reactions among its viewers. The conflict came to a peak when a man named Gerard Jan van Bladeren visited the exhibition. He later described feeling overwhelmingly provoked by the painting, its vivid color filling the large canvas. In fact, it had such an intense effect on him that he slashed the canvas with a utility knife, essentially "murdering" the painting.

What is color psychology?
Color psychology is the study of color's impact on human behavior. It aims to understand why and how different hues affect our feelings, behavior and decision-making processes. It is used in many fields, from branding and marketing to interior design, art and more, in an attempt to use color optimally to reach a certain goal.

For example, how would you feel going to sleep in a bedroom filled with pale, neutral colors? Would a room with purple walls make you feel more or less relaxed? In the world of button design, are you more inclined to click a green button or a red one? Did color play a part in the last item of clothing you purchased?

Topics of interest in this area include:
• The meanings of colors
• How colors impact physiological responses
• Emotional reactions to color
• Factors that impact color preferences
• Cultural differences in the meanings and associations of different colors
• Whether colors can impact mental health
• How colors can influence behaviors
• Ways that colors can be utilized to promote well-being
• How colors can be used to improve safety and design more optimal home and work environments

History of Color Psychology
The scientific exploration of color psychology is relatively new, but people have long been interested in the nature and impact of color. In ancient cultures, colors were often used to treat different conditions and influence emotions, and they also played a role in various spiritual practices. In 1666, the English scientist Sir Isaac Newton discovered that when pure white light passes through a prism, it separates into all the visible colors. Newton also found that each color is made up of a single wavelength and cannot be separated any further into other colors. Further experiments demonstrated that light could be combined to form other colors: for example, red light mixed with yellow light creates orange, while some colors, such as green and magenta, cancel each other out when mixed, resulting in white light.

"Given the prevalence of color, one would expect color psychology to be a well-developed area," researchers Andrew Elliot and Markus Maier noted in a review of the existing research on the psychology of color. "Surprisingly, little theoretical or empirical work has been conducted to date on color's influence on psychological functioning, and the work that has been done has been driven mostly by practical concerns, not scientific rigor." Despite the general lack of research in this area, color psychology has become a hot topic in marketing, art, design and other areas.

The Psychological Effects of Color
Why is color such a powerful force in our lives? What effects can it have on our bodies and minds? While perceptions of color are somewhat subjective, some color effects have universal meaning. Colors in the red area of the color spectrum are known as warm colors and include red, orange, and yellow. These warm colors evoke emotions ranging from feelings of warmth and comfort to feelings of anger and hostility. Colors on the blue side of the spectrum are known as cool colors and include blue, purple, and green. These colors are often described as calm, but can also call to mind feelings of sadness or indifference.

Symbolic Color Meanings
Symbolic meanings that are often associated with different colors:
Red: passion, excitement, love
Pink: soft, reserved, earthy
Purple: mysterious, noble, glamorous
Blue: wisdom, hope, reason, peace
Green: nature, growth, freshness
Yellow: hope, joy, danger
Orange: warmth, kindness, joy
White: truth, indifference
Black: noble, mysterious, cold

One 2020 study that surveyed the emotional associations of 4,598 people from 30 different countries found that people commonly associate certain colors with specific emotions. According to the study results:
Black: 51% of respondents associated black with sadness
White: 43% of people associated white with relief
Red: 68% associated red with love
Blue: 35% linked blue to feelings of relief
Green: 39% linked green to contentment
Yellow: 52% felt that yellow means joy
Purple: 25% reported that they associated purple with pleasure
Brown: 36% linked brown to disgust
Orange: 44% associated orange with joy
Pink: 50% linked pink with love

Color Psychology as Therapy
Several ancient cultures, including the Egyptians and Chinese, practiced chromotherapy, or the use of colors to heal. Chromotherapy is sometimes referred to as light therapy or colorology.

Colorology is still used today as a holistic or alternative treatment. In this treatment:
• Red is used to stimulate the body and mind and to increase circulation.
• Yellow is thought to stimulate the nerves and purify the body.
• Orange is used to heal the lungs and to increase energy levels.
• Blue is believed to soothe illnesses and treat pain.
• Indigo shades are thought to alleviate skin problems.

Note - Second Part

THE ART OF IMAGING: SCIENCE MEETS BEAUTY

Science meets beauty
While art and science are often presented as opposing forces in today's world, the aim of the two disciplines has always been fundamentally the same: to provide a representation of the real world. This intricate link between art and science becomes more obvious as we look to past discoveries that were made before the invention of modern scientific technologies that help us capture reality, in a time when it was in many ways necessary for scientists to be artists as well. From Greek mathematicians and philosophers such as Pythagoras, Aristotle, and Euclid debating music theory and the golden ratio, to Leonardo da Vinci's Vitruvian Man, the identity of the artist-scientist has been embodied by many throughout history.

The link between art and science is especially clear in the visual domain, where artistic and scientific images alike can be beautiful, informative, and engaging in countless ways. The biological sciences, and the field of bio-imaging in particular, give rise to scientific images that are often rich in aesthetic value. From the microscopic to the macroscopic level, scientific images of all types can capture the beauty and intrigue of scientific discovery. Even images that are not necessarily visually beautiful can stimulate curiosity and evoke emotions, and have long been used as a powerful means of teaching, communicating scientific discoveries, and reaching a broader audience.

Image formation: how does the human eye work?
Before we discuss image formation in analog and digital cameras, we first have to discuss image formation in the human eye, because the basic principle followed by cameras is taken from the way the human eye works. When light falls upon an object, it is reflected back; after passing through the lens of the eye at a particular angle, the image is formed on the retina, which lines the back wall of the eye.
The image that is formed is inverted. This image is then interpreted by the brain, which enables us to understand what we see. From the angles at which the rays arrive, we are able to perceive the height and depth of the object we are seeing. When sunlight falls on the object (in this case, a face), it is reflected back; different rays form different angles as they pass through the lens, and an inverted image of the object is formed on the back wall. The last portion of the figure denotes that the image has been interpreted by the brain and re-inverted.

The art and science of imaging: human eye and camera
Imaging, in all its diverse forms, is a remarkable and essential aspect of human experience. It is a means by which we perceive and understand the world around us, capturing moments in time and conveying messages and emotions. In the realm of imaging, two pivotal instruments coexist, each offering a unique perspective and contributing to our comprehension of visual perception and image capture: the human eye and the camera. These instruments represent not only the marvel of nature and the ingenuity of technology but also the artistry and the science that unite to define our visual world.

The human eye: a canvas of art and science
The human eye, often regarded as nature's masterpiece, is a canvas for artistry. What sets it apart is the subjective nature of human perception. Every individual perceives the world uniquely, influenced by their emotions, experiences, and cultural background. This subjectivity forms the foundation of artistry in human vision. Artists have long been captivated by the idea that no two people see the world in exactly the same way. It fuels their creativity, enabling them to evoke emotions and convey messages through visual art. How artists harness the subjectivity of the human eye:

Subjectivity in interpretation: Artists tap into the human eye's capacity to interpret and imbue meaning into shapes, colors, and patterns.
A simple image can evoke a wide range of emotions and associations based on an individual's personal experiences and worldview.

Cultural influence: Cultural backgrounds shape how people perceive colors, symbols and iconography. Artists use this cultural diversity to create works that resonate with specific audiences or challenge societal norms.

Expression of emotions: Through the medium of visual art, artists can translate their emotions, feelings, and stories into images that communicate on a profound level. The human eye is their instrument for conveying this artistic expression.

Science Aspect
The science of the human eye delves into its intricate biological and physiological mechanisms. This scientific exploration is essential for understanding how the eye captures, processes and interprets visual information. Key scientific aspects of the human eye include:

Biological optics: Scientists study the anatomy of the human eye, from the cornea and lens to the retina and photoreceptor cells (rods and cones). This research helps us comprehend the biological foundations of vision.

Visual perception: The human eye is not merely a camera; it is a gateway to the complex process of visual perception. Research in this area delves into how the brain processes visual data, including phenomena like visual processing, depth perception and optical illusions.

Evolving understanding: Advances in ophthalmology and neuroscience continue to deepen our understanding of the eye. This scientific knowledge informs medical treatments and interventions that enhance and protect vision.
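The inversion and re-inversion described in the image-formation passage above can be sketched on a tiny grid of pixels. This is a geometric illustration only; the brain's actual processing is far more complex than a simple flip.

```python
def invert_image(image):
    """Rotate a 2D image 180 degrees: the flip a pinhole or lens applies,
    and the flip the brain (or a camera's processor) undoes."""
    return [row[::-1] for row in image[::-1]]

# A tiny 'image': 1 marks the bright top-left corner of the scene.
scene = [
    [1, 0, 0],
    [0, 0, 0],
]
on_retina = invert_image(scene)      # projected upside-down and mirrored
perceived = invert_image(on_retina)  # the re-inversion restores the scene
print(on_retina)            # [[0, 0, 0], [0, 0, 1]]
print(perceived == scene)   # True
```

Applying the same 180-degree flip twice returns the original scene, which is why the re-inverted image matches what is actually out in the world.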

History of Photography

Origin of the Camera
The history of the camera and the history of photography are not exactly the same: the concepts behind the camera were introduced long before the concept of photography.

Camera Obscura
The history of the camera begins in Asia. The principles of the camera were first described by the Chinese philosopher Mozi, in what is known as the camera obscura; cameras evolved from this principle. The term camera obscura comes from two words: camera, meaning a room or vault, and obscura, meaning dark. The concept described by the Chinese philosopher was a device that projects an image of its surroundings onto a wall. However, such a device was not actually built by the Chinese.

The Creation of the Camera Obscura
The camera obscura (meaning "dark room" in Latin) is a box-shaped device used as an aid for drawing or entertainment. Also referred to as a pinhole image, it lets light in through a small opening on one side and projects a reversed and inverted image on the other. During the 4th century BC, the Greek philosopher Aristotle noticed that sunlight passing through gaps between leaves projects an image of an eclipsed sun on the ground. The phenomenon was also noted by the 6th-century Greek mathematician and co-architect of the Hagia Sophia, Anthemius of Tralles, who used a type of camera obscura in his experiments. During the 9th century, the Arab philosopher, mathematician, physician, and musician Al-Kindi also experimented with light and a pinhole. Familiar with these early studies, Leonardo da Vinci published the first clear description of the camera obscura in the Codex Atlanticus (1502), a 12-volume bound set of his drawings and writings in which he also discussed other inventions such as flying machines and musical instruments.

How It Works
As the name suggests, many historical camera obscura experiments were performed in dark rooms; the surroundings of the projected image have to be relatively dark for the image to be clear. The human eye works much like the camera obscura: both have an opening (the pupil), a biconvex lens for refracting light, and a surface on which the image is formed (the retina). Early camera obscura devices were large and often installed in entire rooms or tents. Later, portable versions made from wooden boxes often had a lens instead of a pinhole, allowing users to adjust the focus. Some camera obscura boxes also featured an angled mirror, allowing the image to be projected the right way up.
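The projection described above is governed by simple similar-triangle geometry: the image is scaled by the ratio of the screen distance to the object distance, and inverted. The following Python sketch illustrates this; the function name and the example distances are illustrative assumptions, not taken from the text.

```python
# Pinhole (camera obscura) projection by similar triangles.
# The negative sign encodes the inversion of the projected image.

def pinhole_image_height(object_height_m, object_distance_m, screen_distance_m):
    """Height of the projected image in metres (negative = upside down)."""
    return -object_height_m * (screen_distance_m / object_distance_m)

# Illustrative example: a 1.8 m person standing 10 m from the pinhole,
# projected onto a wall 0.5 m behind the opening.
h = pinhole_image_height(1.8, 10.0, 0.5)
print(round(h, 3))  # -0.09 -> a 9 cm image, upside down
```

Moving the screen further from the pinhole enlarges (and dims) the image, which is why later box designs added a lens to gather more light.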

During the 15th century, other artists began to see the potential of using the camera obscura as a drawing aid. However, using the device sparked controversy, as many viewed the tracing method as cheating. Over the years, Da Vinci drew around 270 diagrams of camera obscura devices in his sketchbooks.
Johannes Vermeer and the Camera Obscura
Although there is no documented evidence to prove it, art historians have suggested that the 17th-century Dutch master Johannes Vermeer used the camera obscura as an aid to create his paintings. The theory is based on studies of the artworks themselves. Beneath the surface of his paintings there are no signs that he corrected his layouts as he worked. Instead, Vermeer created a shadowy image outlining the scene before painting, perhaps based on a projected image. The first person to publicly propose the possibility that Vermeer used a camera obscura was the American artist Joseph Pennell. In 1891, he noticed that the man in the foreground of Vermeer's Officer and Laughing Girl is shown nearly twice as large as the girl he sits facing, just as the scene would appear in a photograph. Even if Vermeer did use the camera obscura to achieve photographic perspective, his talent should not be diminished.

Portable Camera
In 1685, the first portable camera was built by Johann Zahn. Before the advent of this device, cameras were the size of a room and were not portable. A transportable camera had been made by Robert Boyle and Robert Hooke, but that device was still far too large to carry from one place to another.
Origin of Photography
Although the camera obscura was built around the year 1000 by a Muslim scientist, its first actual use was described in the 13th century by the English philosopher Roger Bacon, who suggested using the camera for the observation of solar eclipses.

Da Vinci
Although many improvements had been made before the 15th century, the improvements and findings of Leonardo di ser Piero da Vinci were remarkable. Da Vinci was a great artist, musician, anatomist and war engineer, and he is credited with many inventions. One of his most famous paintings is the Mona Lisa. Da Vinci not only built a camera obscura following the principle of the pinhole camera, but also used it as a drawing aid for his artwork. In his work, described in the Codex Atlanticus, many principles of the camera obscura are defined.
The First Photograph
Taken in 1826 or 1827 by Joseph Nicéphore Niépce, the world's oldest surviving photograph was captured using a technique Niépce invented called heliography, which produces one-of-a-kind images on metal plates treated with light-sensitive chemicals. He captured the first photograph, View from the Window at Le Gras, by coating a pewter plate with bitumen and then exposing the plate to light. In 1829, Niépce joined forces with Jacques-Mandé Daguerre, another pioneer in photography. After Niépce's sudden death in 1833, his son Isidore continued working with Daguerre. Together, they devised a system, involving a silver-iodide plate exposed to mercury fumes, to produce high-quality images within minutes. The process became known as 'the daguerreotype', and the images produced as daguerreotypes. In 1839, in exchange for lifetime pensions, the French government bought their photographic technique, and it became the first publicly available photo-taking system. This availability played a huge role in the development of photography as a social phenomenon. While nineteenth-century photography took considerably more time than the seconds it takes to shoot and post a selfie on social media, we owe a lot to the inventions of Niépce and Daguerre.
The Origin of Film
Film was introduced by the American inventor and philanthropist George Eastman, who is considered a pioneer of photography. He founded the company Eastman Kodak, famous for developing film, which started manufacturing paper film in 1885. Eastman first created the Kodak camera and later the Brownie. The Brownie was a box camera that gained popularity thanks to its snapshot feature. After the advent of film, the camera industry boomed once again, and one invention led to another.
Leica and Argus
The Leica and the Argus are two analog cameras developed in 1925 and 1939 respectively. The Leica was built using 35mm cine film. The Argus was another analog camera that used the 35mm format; it was rather inexpensive compared to the Leica and became very popular.
Analog CCTV Cameras
In 1942, the German engineer Walter Bruch developed and installed the very first analog CCTV system. He is also credited with the invention of color television in the 1960s.
Photo-Pac
The first disposable camera was introduced in 1949 by Photo-Pac. It was a single-use camera with a roll of film already included. Later versions of the Photo-Pac were waterproof and even had a flash.
Digital Cameras
Mavica by Sony
The Mavica (magnetic video camera), launched by Sony in 1981, was the first game changer in the digital camera world. Images were recorded on floppy disks and could be viewed later on any monitor screen. It was not a pure digital camera but an analog one, yet it gained popularity because of its capacity to store images on a floppy disk. This meant images could be kept for a long-lasting period, and a huge number of pictures could be saved, with full disks simply replaced by new blank ones. The Mavica could store 25 images per disk.
The Mavica also introduced a 0.3-megapixel capacity for capturing photos.
Fuji DS-1P
The DS-1P camera by Fujifilm, released in 1988, was the first true digital camera.
Nikon D1
The Nikon D1 was a 2.74-megapixel camera and the first commercial digital SLR camera developed by Nikon, and it was affordable for professionals. Today, digital cameras are included in mobile phones with very high resolution and quality.
Camera Systems: Merging Artistry and Precision
Art Aspect
Cameras are the tools of choice for capturing moments and stories, and they are celebrated for their capacity to merge artistry and precision. Just as the human eye offers subjective interpretation, cameras offer creative expression. Photography, in particular, is often hailed as a form of art in which photographers wield cameras to create visually striking and emotionally resonant images. The artistry of cameras is seen in:
Composition Techniques: Photographers employ composition techniques such as the rule of thirds, leading lines and framing to create visually pleasing and emotionally engaging photographs. These techniques are akin to the brushstrokes of painters on a canvas.
Creative Vision: Photographers are artists with a unique visual perspective. They use their understanding of lighting, color and perspective to capture moments, convey stories and evoke emotions. Each photograph is a canvas upon which they paint their creative vision.
Visual Storytelling: Cameras enable visual storytelling, where a single image can convey an entire narrative or emotion. Photographers play the role of visual storytellers, capturing scenes that inspire, inform or challenge.
Science Aspect
Behind the lens there is an intricate tapestry of science that governs the functioning of cameras. This science encompasses optical engineering, image sensors and image processing, each contributing to the precision and fidelity of image capture.
Scientific aspects of camera systems include:
Optical Engineering: The design of camera lenses and optics is a field of precision engineering. Optical engineers work to ensure that lenses capture and focus light with minimal distortion and aberration.

Camera Basics
A camera is a remote sensing device that can capture and store or transmit images. Light is collected and focused through an optical system onto a sensitive surface (the sensor), which converts the intensity and frequency of the electromagnetic radiation into information through chemical or electronic processes. The simplest camera of this kind consists of a dark room or box into which light enters only through a small hole and is focused on the opposite wall, where it can be seen by the eye or captured on a light-sensitive material (e.g. photographic film). This imaging method, which dates back centuries, is called the 'camera obscura' (Latin for 'dark room') and gave its name to modern cameras. Camera technology has improved hugely in the last decades since the development of the Charge-Coupled Device (CCD) and, more recently, of CMOS technology. Previous standard systems, such as vacuum-tube cameras, have been discontinued. The improvements in image resolution and acquisition speed have also improved the quality and speed of machine vision cameras.
Camera Components
Parts of a Camera and Their Functions
Modern digital cameras all have the same basic parts. Knowing the basic parts of a camera and their functions is essential so that you can use them to your advantage and maximize the camera's potential. The labelled parts of a camera, how they work, and what they contribute to the photo-making process are:
1. Camera Body: the base of the camera itself.
2. Viewfinder
3. Pentaprism: a mirror placed at a 45-degree angle behind the camera lens; it projects the light captured by the lens to the viewfinder.
4. Built-in Flash
5. Lens Mount
6. Lens Release Button
7. Mode Dial
8. Focusing Screen
9. Digital Sensor
10. Grip
11. Shutter
12. Display
13. Electronics: can be divided into three separate categories: photo-capture components, camera controller and user-interface controller.
14. Remote Control Sensor
15. Shutter Button
16. Autofocus System
17. Reflex and Relay Mirror
18. Aperture
19. Main Dial
20. Hot Shoe
21. Contacts
22. Processing Engine: the image processor.
23. Buffer: temporary storage.
24. Function Button
25. ISO
26. Red-eye Reduction
27. Memory Card Slot
28. Tripod Mount
29. Batteries
Image Formation on Analogue and Digital Cameras
Image Formation on Analogue Cameras
In analogue cameras, image formation is due to a chemical reaction that takes place on the strip used for image formation. A 35mm strip is used in an analogue camera; it is denoted in the figure as the 35mm film cartridge. This strip is coated with silver halide, a light-sensitive chemical substance. Light consists of small particles known as photons. When these photons pass into the camera, they react with the silver halide particles on the strip, producing silver, which forms the negative of the image.
How Film Cameras Work: The Basic Mechanics
A camera is essentially a light-proof box with a tiny hole in the front end that allows light from an external scene to enter at the right time.
Image Formation on Digital Cameras
In digital cameras, image formation is not due to a chemical reaction; it is a bit more complex than that. In a digital camera, a CCD array of sensors is used for image formation.
Image Formation Through a CCD Array
CCD stands for charge-coupled device. This is an image sensor which, like other sensors, senses values and converts them into an electric signal; in the case of the CCD, it senses the image and converts it into an electric signal. The CCD has the shape of an array or rectangular grid, like a matrix in which each cell contains a sensor that senses the intensity of photons. As in analog cameras, when light falls on an object, it reflects off the object and is allowed to enter the camera.
Each sensor in the CCD array is itself an analog sensor. When photons of light strike the chip, a small electrical charge is held in each photosensor. The response of each sensor is directly proportional to the amount of light (photon energy) striking the surface of the sensor. Since we have already defined an image as a two-dimensional signal, and because of the two-dimensional layout of the CCD array, a complete image can be obtained from the array. The array has a limited number of sensors, which means only a limited amount of detail can be captured. Each sensor can hold only one value for the photons that strike it, so the number of photons striking it (the current) is counted and stored. In order to measure these counts accurately, external CMOS sensors are also attached to the CCD array.
Sensor Size and Resolution
An important feature of a camera is the sensor size (or format): this indicates the dimensions of the image sensor and its form factor. Typically, this parameter is expressed in inches (and fractions of inches). However, the actual dimensions of a sensor differ from the fraction value, which often causes confusion among users. This practice dates back to the 1950s, the era of TV camera tubes, and is still the standard today. The common 1" circular video camera tubes have a rectangular photosensitive area with a diagonal of about 16mm, so a digital sensor with a 16mm diagonal is a 1" equivalent. Furthermore, it is always wise to check the sensor specifications, since even two sensors with the same format may have slightly different dimensions and aspect ratios.
Introduction to the Pixel
In digital imaging, a pixel (or picture element) is the smallest item of information in an image. Pixels are arranged in a two-dimensional grid and represented using squares. Each pixel is a sample of an original image, and more samples typically provide a more accurate representation of the original.
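The CCD behaviour described earlier, a rectangular grid of sensors each accumulating charge in proportion to the photons that strike it, can be sketched in a few lines of Python. The grid values and the quantum-efficiency factor below are made-up illustrative numbers, not specifications from the text.

```python
# Minimal sketch of CCD exposure: a 2-D grid of photon counts becomes a
# 2-D grid of charges, each proportional to the light that struck the cell.

QUANTUM_EFFICIENCY = 0.5  # assumed fraction of photons converted to charge

def expose(photon_counts):
    """Convert a 2-D grid of photon counts into a 2-D grid of charges."""
    return [[int(p * QUANTUM_EFFICIENCY) for p in row] for row in photon_counts]

# A tiny 3x3 'image': bright centre, dimmer edges.
photons = [
    [100, 200, 100],
    [200, 400, 200],
    [100, 200, 100],
]
charges = expose(photons)
print(charges)  # [[50, 100, 50], [100, 200, 100], [50, 100, 50]]
```

Because each cell holds exactly one value, the grid size directly limits the detail captured, which is the resolution limit discussed next.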
The intensity of each pixel is variable; in colour systems each pixel typically has three or four components, such as red, green and blue, or cyan, magenta, yellow and black. The word pixel is a contraction of pix ('pictures') and el (for 'element'). The value of each sensor in the CCD array corresponds to the value of an individual pixel: the number of sensors equals the number of pixels, and each sensor can hold one and only one value.
What Is Image Resolution?
The term resolution is often used as a pixel count in digital imaging. When pixel counts are referred to as resolution, the convention is to describe the pixel resolution with a set of two numbers: the first is the number of pixel columns (width) and the second the number of pixel rows (height), for example 640 by 480. Another popular convention is to cite resolution as the total number of pixels in the image, typically given as a number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million. An image that is 2048 pixels wide and 1536 pixels high has a total of 2048 x 1536 = 3,145,728 pixels, or 3.1 megapixels; one could refer to it as 2048 by 1536 or as a 3.1-megapixel image. Other conventions include describing pixels per length unit or per area unit, such as pixels per inch or per square inch. This is an illustration of how the same image might appear at different pixel resolutions.
Sensor Types: CCD and CMOS
CCDs (Charge-Coupled Devices)
A charge-coupled device (CCD) is a light-sensitive integrated circuit that captures images by converting photons to electrons. A CCD sensor breaks the image down into pixels. Each pixel is converted into an electrical charge whose intensity is related to the intensity of light captured by that pixel.
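The megapixel arithmetic above can be reproduced directly; this small Python sketch follows the worked 2048 x 1536 example (the function name is our own choice).

```python
# Megapixel count: pixel columns times pixel rows, divided by one million,
# exactly as the text describes.

def megapixels(width_px, height_px):
    """Total pixel count expressed in megapixels."""
    return width_px * height_px / 1_000_000

total_pixels = 2048 * 1536
print(total_pixels)                      # 3145728
print(round(megapixels(2048, 1536), 1))  # 3.1
```

The same function applies to any of the conventions mentioned: 640 x 480, for instance, works out to roughly 0.3 megapixels, matching the Mavica-era figure quoted earlier.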
For many years, CCDs were the sensors of choice in a wide range of devices, but they are steadily being replaced by image sensors based on complementary metal-oxide-semiconductor (CMOS) technology. The CCD was invented in 1969 at Bell Labs (now part of Nokia) by George Smith and Willard Boyle. However, the researchers' efforts were focused primarily on computer memory, and it was not until the 1970s that Michael F. Tompsett, also at Bell Labs, refined the CCD's design to better accommodate imaging.
What Does a Charge-Coupled Device Do?
Small, light-sensitive areas are etched into a silicon surface to create an array of pixels that collect photons and generate electrons. The number of electrons in each pixel is directly proportional to the intensity of light captured by that pixel. After all the electrons have been generated, they undergo a shifting process that moves them toward an output node, where they are amplified and converted to voltage.
CMOS Sensors
The CMOS sensor works on the principle of the photoelectric effect to change photons into electrical energy. Unlike CCD sensors, CMOS sensors convert electric charge into voltage directly within the pixels. CMOS sensors are now available with outstanding image quality and high frame rates, so they are used in high-performance industrial cameras. This readout scheme has the disadvantage of higher noise, due to the readout transistors in each pixel and to the so-called fixed-pattern noise: a non-homogeneity in the image caused by mismatches across the different pixel circuitries.
Global and Rolling Shutter (CMOS)
Global and rolling shutter refer to the way an image is captured and read out. With the rolling shutter readout scheme, the exposure time is the same for all the pixels of the sensor, but there is a delay between the exposure of one row and the following one.
This scheme gives an image that is not captured all at the same time but is slightly shifted in time; this can be a problem in fast applications requiring a high frame rate.
Global Shutter
By contrast, the exposure of a global shutter sensor starts and ends at the same time for all pixels, so the information given by each pixel refers to the same time interval in which the image is acquired. Here only the readout is sequential, but the sampled voltage refers to one precise moment in time for the whole array. This kind of sensor is mandatory for high-speed applications.
Image Formation: Aperture, Shutter Speed, ISO, Depth of Field and Depth of Focus
Image formation in photography is a complex interplay of various factors, including:
Aperture
The aperture is the adjustable opening in the camera lens through which light passes. It is measured in f-stops (e.g. f/2.8, f/5.6, f/16). The road to properly exposing photos and videos starts with aperture. While aperture is considered a camera setting, it is really a lens adjustment, and it affects two critical components of capturing an image: light and depth of field. A large (or open) aperture lets in more light to hit the camera sensor, whereas a small (or closed) aperture lets in less light.
Role in Image Formation
Light Control: Aperture controls the amount of light that enters the camera. A lower f-stop (e.g. f/2.8) indicates a larger aperture, allowing in more light, while a higher f-stop (e.g. f/16) represents a smaller aperture, letting in less light.
Depth of Field: Aperture influences the depth of field, which is the range of distances within the scene that appears sharp and in focus. A wider aperture (lower f-stop) results in a shallower depth of field, isolating the subject from the background. Conversely, a narrower aperture (higher f-stop) increases the depth of field, keeping more of the scene in focus.
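The aperture/depth-of-field relationship can be made concrete with the standard hyperfocal-distance approximation H = f^2 / (N * c), where f is the focal length, N the f-number and c the circle of confusion. This is a general optics formula rather than something stated in the text, and the numbers below are illustrative.

```python
# Hyperfocal distance approximation: when the lens is focused at H,
# everything from roughly H/2 to infinity is acceptably sharp. A larger
# f-number (smaller aperture) gives a smaller H, i.e. deeper depth of field.

def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in millimetres, using H = f^2 / (N * c)."""
    return focal_length_mm ** 2 / (f_number * coc_mm)

# Illustrative 50mm lens with an assumed 0.03mm circle of confusion:
print(round(hyperfocal_mm(50, 2.8) / 1000, 1))  # about 29.8 m at f/2.8
print(round(hyperfocal_mm(50, 16) / 1000, 1))   # about 5.2 m at f/16
```

Stopping down from f/2.8 to f/16 pulls the hyperfocal distance from roughly 30 m to about 5 m, which is exactly the "more of the scene in focus" effect described above.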
Creative Applications
Bokeh: Wider apertures (e.g. f/1.4) create pleasing background blur, known as bokeh, making the subject stand out.
Foreground and Background Separation: Controlling aperture allows you to emphasize the subject while minimizing distractions in the foreground or background.
So remember: smaller apertures, like f/16, let in less light; larger apertures, like f/1.4, let in more light.
How Aperture Affects Exposure
Aperture has several effects on your photographs. Perhaps the most obvious is the brightness, or exposure, of your images. As the aperture changes in size, it alters the overall amount of light that reaches your camera sensor, and therefore the brightness of your image. A large aperture (a wide opening) passes a lot of light, resulting in a brighter photograph. A small aperture does just the opposite, making a photo darker. Take a look at the illustration below to see how it affects exposure.
How Aperture Affects Depth of Field
The other critical effect of aperture is depth of field. Depth of field is the amount of your photograph that appears sharp from front to back. Some images have a 'thin' or 'shallow' depth of field, where the background is completely out of focus; other images have a 'large' or 'deep' depth of field, where both the foreground and background are sharp. A large aperture produces significant background blur, which suits subjects you want to isolate. On the other hand, a small aperture results in a small amount of background blur, which is typically ideal for some types of photography, such as landscape and architecture. In the landscape photo below, I used a small aperture to ensure that both my foreground and background were as sharp as possible from front to back.
Taken using a very small aperture of f/16 in order to remove background blur and achieve sufficient depth of field.
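The "smaller f-number, more light" rule above follows from the aperture area: the light admitted is proportional to 1/N^2 for f-number N, so each full stop in the sequence f/1.4, f/2, f/2.8, ... halves the light. A quick Python sketch; the function name and the f/1.4 reference baseline are our own choices.

```python
# Relative light admitted by an aperture, proportional to 1/N^2.
# The reference stop f/1.4 is an arbitrary illustrative baseline.

def relative_light(f_number, reference_f=1.4):
    """Fraction of light admitted at f_number relative to reference_f."""
    return (reference_f / f_number) ** 2

print(relative_light(2.8))           # 0.25 -> two full stops below f/1.4
print(round(relative_light(16), 4))  # 0.0077 -> about seven stops below
```

This is why the standard f-stop scale is a geometric sequence with ratio sqrt(2): each step halves the light reaching the sensor, so exposure can be traded stop-for-stop against shutter speed and ISO.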