User talk:Ravitrivedi89

"suiside" type of  mood disorder
psychology behind suisidal attampts

To prevent suicidal attempts, we should keep in mind that each of us is a special kind of person. If some piece of work does not go satisfactorily, we should not lose hope, because there are many kinds of work we can do. In such a situation we should ask what kind of work suits us best, and do it. Another thing: we must keep away from "comparative thinking". By this I mean the frequent thought that some other person is more efficient or more clever than we are. Thoughts of this type are called "comparative thinking".

Keep this well in mind: we all have a special kind of personality, and none of us can do every kind of work. That is the reality, and we must accept it. Some kinds of work will not be done well by you, and some kinds of work can be done only by you. Below I describe the psychology behind suicide.

I. Introduction
II. Who Commits Suicide?
   A. Age
   B. Race
   C. Sex
   D. Marital Status
   E. Occupational and Educational Status
   F. Psychiatric History
   G. Culture
III. Why Do People Commit Suicide?
   A. Cognitions
   B. Loss
   C. Communication
   D. Current State
   E. Social Factors
IV. Prevention [Film/Guest Speaker]

I. Introduction

The effects of a Mood Disorder on an individual's life can be profound and devastating. The DSM-III-R list of symptoms makes this clear: behavioral, cognitive, physiological, and emotional consequences are all aspects of the Mood Disorders. In addition, there are often relationship difficulties, occupational problems, and substance abuse. The most tragic potential consequence is self-destruction. (Depression is certainly not the only cause of suicide; suicide is also clearly associated with Schizophrenia, substance abuse, and some of the personality disorders, for example.)

Some estimates (Carson, et al., 1988; Klerman, 1982; Wekstein, 1979):

Suicide is one of the leading causes of death in Western countries, and rates are on the rise in the U.S.

In the U.S.:

200,000 persons attempt suicide each year

27,000 succeed each year (about one every 20 minutes)

A major cause of death for adolescents and young adults (10,000 or more college students attempt suicide every year)

[These numbers are undoubtedly underestimates - suicide is often kept secret by family and even by professionals]

II. Who commits suicide?

Depression is clearly associated with suicide: perhaps more than half of those who commit suicide are depressed (Barraclough, Bunch, Nelson, et al., 1974; Murphy, 1983). The lifetime risk of suicide for people with Mood Disorders has been estimated at anywhere from 15% to 50%, or even 90% (Murphy, 1983). Note: Much of what we know about the characteristics of suicide is based on "psychological autopsies": interviews with close friends, relatives, doctors, etc., conducted shortly after the death. Problems with this approach include memory biases and intentional distortions.

Other characteristics of people who commit suicide:

A. Age: Suicides are attempted by everyone from preteens to the elderly, but rates are highest for people between the ages of 45 and 60 (Davison & Neale, 1986).

B. Race: Suicide rates are greater for whites than nonwhites (Firestone, 1986; Seiden, 1984).

C. Sex: Three times as many men as women succeed in killing themselves (Davison & Neale, 1986; Firestone, 1986), but three times as many women as men attempt suicide (Davison & Neale, 1986). This is because women tend to act impulsively, are relatively public about their attempts, and tend to use relatively less lethal means. Men, in contrast, tend to give prior warning signs of their intentions (indicating that the act was not purely impulsive but thought out in advance), and they use highly effective methods (e.g., a gun, jumping from a high place) (Clayton, 1983).

D. Marital Status: People who are single, divorced or widowed are 2-3 times as likely to commit suicide as married people, especially among men (Clayton, 1983; Murphy, 1983). For married women (esp. ages 20 to 30), there is some evidence that they are more likely to attempt suicide than single women (Shneidman & Farberow, 1970).

E. Occupational and Educational Status: White-collar workers are more likely to commit suicide than blue-collar workers (Firestone, 1986). At particular risk are white males from affluent backgrounds (Seiden, 1984), for example psychiatrists, psychologists, physicians and lawyers (Davison & Neale, 1986). In addition, college students who excel academically are more likely to commit suicide, perhaps because they tend to be the most self-critical (Seiden, 1966; Firestone, 1986).

F. Psychiatric History: Individuals who have mental health problems, and individuals who have been hospitalized because of such problems, are at greater risk for suicide (Clayton, 1983; Murphy, 1983; Motto, 1979). Especially at risk are those with Major Depression, Bipolar Disorder, or chronic alcoholism.

G. Culture: Suicide rates vary from one culture to another. For example, here are some suicide rates (per 100,000) for a number of different cultures (DeCatanzaro, 1981; Kidson & Jones, 1968; Wekstein, 1979):

Aborigines of Australia............0.0

Greece..................less than 9.0

United States...........12.2

Sweden..................20.8

Czechoslovakia..........22.4

Hungary.................40.7

H. Handout 9-1 includes some of these factors, as well as other factors, which distinguish between high-risk and low-risk groups.

III. Why do people commit suicide?

There have been numerous attempts to explain why someone would want to kill him- or herself. Yet even with these theories, it is notoriously difficult to predict who will actually attempt suicide (Murphy, 1983). We will review some of the theories that investigators have suggested. Each undoubtedly captures only certain aspects of this very complex phenomenon; perhaps the most fruitful approach will be a theory that combines variables from these various approaches.

A. Cognitions: Various theories place cognitions (beliefs, imagery, thought processes, etc.) in a central causal role for psychological disorders (e.g., Beck & Emery, 1985; Ellis & Harper, 1976). A person contemplating suicide may do so because he or she wishes to make amends for some act committed, to be rid of unacceptable thoughts, or because of other cognitions he or she has (Mintz, 1968). Freud believed suicide was the result of aggression turned inward: we are angry at mother, but that is unacceptable, so we turn the anger in on ourselves. Excessive self-criticism and self-aggression are also often posited as reasons for suicide.

Example: The "Inner Voice" (Firestone, 1986) - Some people have a system of hostile thoughts and attitudes that constantly nag, judge, attack and punish oneself. This inner voice is part of one's "internal dialogue" (the thoughts that run through our heads all the time). For example:

"You clumsy fool! Look at what you did!" "Can't you do anything right?" "Oh no. I know I'll make a fool of myself." "I can't do that, I'm too stupid."

The "voice" operates on a continuum: We have all experienced it to some extent. When it becomes vicious and predominant, self-destruction may result (eg: substance abuse, psycho- somatic illness, suicide).

The voice is learned: it typically arises from overly punitive and critical parents (or other important persons) during one's childhood, and this attitude then becomes internalized, although innate factors such as temperament may also set the stage for one's inner voice.

B. Loss: The loss of a loved person may result in great despair and hopelessness. Even Freud experienced the effects of lost love: at 29, he wrote in a note to his fiancee: "I have long since resolved on a decision (suicide), the thought of which is in no ways painful, in the event of losing you" (Jones, 1963, p. 85).

C. Communication: Two-thirds of all suicide attempts are estimated to be actually attempts to communicate something to others (Carson, et al., 1988). For example: the need for love, the desire for others to feel guilty, unmet needs in general. Thus, the method used in the attempt is typically nonlethal, and it is done when and where others are likely to discover the person and intervene.

D. Current State: The intention to kill oneself is not a constant condition for the individual. It comes and goes. This suggests that the intention to commit suicide is "state-dependent": the intent only arises when the person is in an appropriate state (cognitive state, mood state...). For example, there is some evidence that indicates suicide rarely occurs in a person who is not currently going through a depressive episode (Murphy, 1983). It should be noted, however, that other evidence suggests that it is when the depressed person is beginning to feel better that suicide risk is highest (Beck, 1967).

E. Social Factors: Emile Durkheim, a famous sociologist of the 1800s, identified three types of suicide. According to Durkheim (1951), the motivation to commit suicide is largely a social phenomenon.

Durkheim's three types of suicide:

1. Altruistic Suicide - A person who identifies highly with a social group's morals, interests, and norms will be willing to sacrifice his/her life for the goals of the group. The sacrifice may arise because the group requires his/her death, or because he/she violated certain group norms. Whatever the case, the person willingly commits suicide because of his/her high integration with the group.

Examples:

a) The Jonestown mass suicide: more than 900 followers of the religious leader Jim Jones committed suicide in 1978 in an isolated commune in Guyana
b) Martyrs
c) Kamikaze pilots of the Japanese air force in WWII

2. Egoistic Suicide - In this case, the individual is weakly integrated into the group. Durkheim described these people as self-centered, with no emotional attachments to others or to the group. Such a person loses social restraints, has no sense of commitment, and so judges that suicide will not affect anyone but him/herself.

3. Anomic Suicide - Durkheim described a state of being called "anomie": a sense of normlessness experienced when one has no clear idea of the group's expectations for moral and appropriate behavior. He or she is left in a state of limbo and disorientation. Anomie occurs during times of rapid social change, when one's relation to the group changes in sudden and unanticipated ways. The rapid social changes Durkheim discussed were 1) industrialization, 2) urbanization (the growth of and move to cities), and 3) modernization. These changes are still occurring. The world of your parents' childhood was very different from yours. How do you learn the rules? What are the rules? Such uncertainty may increase the risk for suicide.

I hope you have all understood this psychology. Suicide is closely bound up with the mood disorders.

Whipple surgery procedure
The Whipple procedure is a pancreatic cancer treatment and the most common operation carried out for pancreatic cancer; it may also be used to treat other cancers, such as small bowel cancer. It involves the surgical removal of the head of the pancreas, the lower end of the bile duct and the upper end of the duodenum, followed by reconnecting the stomach, pancreatic duct and bile duct to the small intestine. Pancreatic cancer is often thought to be untreatable and rapidly fatal. This may be true for some, but it is essential to acknowledge that many people diagnosed with pancreatic cancer can be helped.

Typically, the projected survival for an individual who has had Whipple surgery for pancreatic cancer is about thirteen to twenty months. These numbers refer to populations of people, not to individuals; consequently, the actual life expectancy for a given person can be appreciably more or less than the average.

Surgeons consider pancreatic cancer Whipple surgery, or the Whipple procedure (pancreatoduodenectomy), appropriate only if the pancreatic cancer is limited to a small area and can be completely removed with no leftover cells at the cut line. This procedure is used for cancers in the head or main part of the pancreas; cancers in the body or tail instead require partial removal of the pancreas along with the spleen.

Pancreatic cancer operations are not usually performed if the cancer has spread: they are not beneficial, and the procedure would only delay the start of medical treatments such as chemotherapy. When Whipple surgery and other pancreatic cancer operations were first performed several decades ago, the complication and mortality rates were very high, in fact over twenty-five percent. Surgery now is much safer, and in the hands of an experienced surgeon mortality rates are between two and three percent. Following pancreatic cancer operations, approximately thirty percent of people develop problems, but most of these resolve without long-term consequences.

Common complications of pancreatic cancer Whipple surgery include nausea, due to a delayed recovery of stomach movement; surgeons can adapt the procedure to prevent this from occurring. After pancreatic cancer operations, wound infection and leakage of pancreatic juices are common, but these generally improve. Diabetes may occur or may worsen as a result of pancreatic cancer surgery. Most people lose ten to fifteen percent of their body weight after pancreas Whipple surgery.

It is unclear why this extreme weight loss occurs, but it is known not to be simply the result of a period without eating. Many people never regain all of the weight they lose, and their body weight remains lower for several months after pancreatic cancer Whipple surgery. Taking pancreatic enzyme supplements can relieve the side effects caused by both the cancer and the surgery. Although surgery does not offer a true long-term cure for pancreatic cancer, it is the best available tool when the cancer is limited to a small area.

4G INTERNET TECHNOLOGY
4G mobile technology is the name given to the next generation of mobile devices such as cell phones. It became available from at least one provider in several parts of the US in 2009. There is not yet an agreed industry standard for what constitutes 4G mobile, so for now it is merely a marketing term.

The use of G, standing for generation, in mobile technology covers the major advances of the past 20-30 years. 1G technology involved the first widely available mobile phones. 2G technology, which began in the early 1990s, switched to a digital format and introduced text messaging. 3G technology improved the efficiency of how data is carried, making it possible to carry enhanced information services such as websites in their original format. The latest iPhone is the best known example of 3G technology.

4G mobile is not yet established as an agreed set of standards, so its features are currently simply goals rather than requirements. As well as drastically increasing data transfer speeds, 4G mobile should use enhanced security measures. Another goal is to reduce blips in transmission when a device moves between areas covered by different networks. 4G mobile networks should also use a network based on the IP address system used for the internet. Within the United States, there are two major systems using 4G mobile technology. One is known as WiMax and is backed by Clearwire, a firm whose majority owner is Sprint Nextel. It began testing services in Baltimore in 2008 and was set to expand this into major new markets in 2009. Sprint intended to have 80 cities covered by the end of 2010.

The rival system, Long Term Evolution or LTE, is backed mainly by Verizon. It was expected to be ready for testing in 2010 but not available for widespread use until 2012. LTE's backers hoped to overcome this disadvantage by offering faster speeds and producing cheaper equipment.

Unlike previous generations of mobile technology, 4G mobile will be widely used for internet access on computers as well as carrying cell phone communications. Customers in areas which have strong 4G coverage will be able to use it for a home broadband connection which doesn't require any cabling to their household. It can also be used for accessing the internet on the move without having to be in a wireless hotspot such as those offered by some coffee shops, airports and libraries.

https://docs.google.com/document/pub?id=1b-RVH9x3gdMhyLn2wi8AdQpRaLmXDRbhF9LNqdnybAc

programmable logic controllers (PLC)

Hello to all. In this article I am trying to provide only basic information about PLCs; I am sure you all know this information already. No figures are provided. An article on the latest and most advanced PLCs will be published in the future.

Introduction
2.1 First programmable controllers
2.2 PLC controller components
2.3 Central Processing Unit - CPU
2.4 Memory
2.5 Programming a PLC controller
2.6 Power supply
2.7 PLC controller inputs
2.8 Input adjustment interface
2.9 PLC controller output
2.10 Output adjustment interface
2.11 Extension lines

Introduction

Industry began to recognize the need for quality improvement and increased productivity in the sixties and seventies. Flexibility also became a major concern: the ability to change a process quickly became very important in order to satisfy consumer needs.

Try to imagine an automated industrial production line in the sixties and seventies. There was always a huge electrical board for system control, and not infrequently it covered an entire wall! Within this board there was a great number of interconnected electromechanical relays that made the whole system work. "Connected" meant that an electrician had to wire all the relays manually! An engineer would design the logic for a system, and electricians would receive a schematic outline of that logic to implement with relays. These relay schematics often contained hundreds of relays. The plan the electrician was given was called a "ladder schematic". The ladder displayed all the switches, sensors, motors, valves, relays, etc. found in the system, and the electrician's job was to connect them all together. One problem with this type of control was that it was based on mechanical relays. Mechanical devices, with their movable parts that wear out, were usually the weakest link in the system. If one relay stopped working, the electrician would have to examine the entire system, which stayed down until the cause of the problem was found and corrected.

The other problem with this type of control was downtime: the system had to be turned off so that connections could be changed on the electrical board. If a firm decided to change the order of operations (even a small change), it meant major expense and lost production time until the system was functional again.

It is not hard to imagine an engineer making a few small errors in his design, an electrician making a few mistakes in wiring the system, and a few components simply being bad. The only way to see whether everything is all right is to run the system. Since systems rarely work on the first try, finding errors was an arduous process. Keep in mind, too, that no product could be made during these corrections and changes in wiring: the system had to be literally disabled before changes could be performed, which meant the entire production staff on that line was idle until the system was fixed. Only when the electrician was done finding and repairing errors was the system ready for production. Expenditures for this kind of work were too great even for well-to-do companies.

2.1 First programmable controllers

"General Motors" is among the first who recognized a need to replace the system's "wired" control board. Increased competition forced auto-makers to improve production quality and productivity. Flexibility and fast and easy change of automated lines of production became crucial! General Motors' idea was to use for system logic one of the microcomputers (these microcomputers were as far as their strength beneath today's eight-bit microcontrollers) instead of wired relays. Computer could take place of huge, expensive, inflexible wired control boards. If changes were needed in system logic or in order of operations, program in a microcomputer could be changed instead of rewiring of relays. Imagine only what elimination of the entire period needed for changes in wiring meant then. Today, such thinking is but common, then it was revolutionary!

Everything was well thought out, but a new problem came up: how to get electricians to accept and use the new device. Systems are often quite complex and require complex programming, and it was out of the question to ask electricians to learn and use a computer language in addition to their other job duties. General Motors' Hydra-Matic division recognized the need and wrote out project criteria for the first programmable logic controller (there were companies selling devices that performed industrial control, but those were simple sequential controllers, not PLC controllers as we know them today). The specification required that the new device be based on electronic rather than mechanical parts, have the flexibility of a computer, function in an industrial environment (vibration, heat, dust, etc.), and be capable of being reprogrammed and reused for other tasks. The last criterion was also the most important: the new device had to be easy for electricians and technicians to program and maintain. When the specification was done, General Motors looked for interested companies and encouraged them to develop a device that would meet it.

"Gould Modicon" developed a first device which met these specifications. The key to success with a new device was that for its programming you didn't have to learn a new programming language. It was programmed so that same language ûa ladder diagram, already known to technicians was used. Electricians and technicians could very easily understand these new devices because the logic looked similar to old logic that they were used to working with. Thus they didn't have to learn a new programming language which (obviously) proved to be a good move. PLC controllers were initially called PC controllers (programmable controllers). This caused a small confusion when Personal Computers appeared. To avoid confusion, a designation PC was left to computers, and programmable controllers became programmable logic controllers. First PLC controllers were simple devices. They connected inputs such as switches, digital sensors, etc., and based on internal logic they turned output devices on or off. When they first came up, they were not quite suitable for complicated controls such as temperature, position, pressure, etc. However, throughout years, makers of PLC controllers added numerous features and improvements. Today's PLC controller can handle highly complex tasks such as position control, various regulations and other complex applications. The speed of work and easiness of programming were also improved. Also, modules for special purposes were developed, like communication modules for connecting several PLC controllers to the net. Today it is difficult to imagine a task that could not be handled by a PLC.

2.2 PLC controller components

A PLC is actually an industrial microcontroller system (in more recent designs, a processor rather than a microcontroller) with hardware and software specifically adapted to the industrial environment. Its typical components are the CPU, memory, power supply, and input and output blocks. Special attention needs to be given to the input and output blocks, because they contain the protection needed to isolate the CPU from the damaging influences the industrial environment can bring in via the input lines. The programming unit is usually a computer used for writing the program (often as a ladder diagram).

2.3 Central Processing Unit - CPU

The Central Processing Unit (CPU) is the brain of a PLC controller. The CPU itself is usually a microcontroller: formerly 8-bit microcontrollers such as the 8051, and now 16- and 32-bit microcontrollers. An unwritten rule is that you will mostly find Hitachi and Fujitsu microcontrollers in PLC controllers by Japanese makers, Siemens in European controllers, and Motorola microcontrollers in American ones. The CPU also takes care of communication, the interconnection of the other parts of the PLC controller, program execution, memory operation, and supervising inputs and setting outputs. PLC controllers have complex routines for memory checkup to ensure that the PLC memory has not been damaged (the checkup is done for safety reasons). Generally speaking, the CPU performs a great number of self-checks on the PLC controller so that any errors are discovered early; look at any PLC controller and you will see several LED indicators for error signaling.

2.4 Memory

System memory (today mostly implemented in FLASH technology) holds the operating system the PLC uses for process control. Aside from the operating system, it also contains the user program, translated from a ladder diagram into binary form. FLASH memory contents change only when the user program is changed. Earlier PLC controllers used EPROM instead of FLASH; EPROM had to be erased with a UV lamp and programmed in an external programmer, a process that FLASH technology greatly shortened. Reprogramming the program memory is done through a serial cable from the application-development software.

User memory is divided into blocks with special functions. Some parts of memory store input and output status: the real status of an input is stored as "1" or "0" in a specific memory bit, and each input or output has one corresponding bit in memory. Other parts of memory store the contents of variables used in the user program; for example, a timer or counter value would be stored in this part of the memory.
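The one-bit-per-I/O-point mapping described above can be sketched with simple bit operations. The word width and bit positions below are hypothetical examples; a real PLC exposes this image through its own addressing scheme rather than raw Python integers.

```python
# Sketch of a bit-mapped I/O image: one bit per input or output,
# packed into a status word, as described above. Bit positions
# here are hypothetical examples.

def read_bit(word, bit):
    """Return the stored status (0 or 1) of one I/O point."""
    return (word >> bit) & 1

def write_bit(word, bit, value):
    """Set or clear one I/O bit and return the updated word."""
    return word | (1 << bit) if value else word & ~(1 << bit)

image = 0b00000101          # inputs 0 and 2 are currently "1"
print(read_bit(image, 2))   # 1
image = write_bit(image, 3, 1)
print(bin(image))           # 0b1101
```

Packing status bits this way is why even a small memory can track hundreds of I/O points: a single 16-bit word holds the status of sixteen inputs or outputs.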

2.5 Programming a PLC controller

A PLC controller is usually reprogrammed through a computer, but it can also be programmed through hand-held programmers (consoles). In practice this means that any PLC controller can be programmed through a computer if you have the software needed for programming. Today's portable computers are ideal for reprogramming a PLC controller on the factory floor, which is of great importance to industry. Once the system is corrected, it is also important to load the right program into the PLC again, and it is good to check from time to time that the program in the PLC has not changed. This helps avoid hazardous situations on the factory floor (some automakers have established communication networks that regularly check the programs in their PLC controllers to ensure that only correct programs execute).

Almost every program for programming a PLC controller offers various useful options, such as forced switching of the system inputs/outputs (I/O lines) on and off, following the program in real time, and documenting the diagram. Documentation is necessary for understanding and diagnosing failures and malfunctions. The programmer can add remarks, names of input or output devices, and comments that are useful when finding errors or maintaining the system. Adding comments and remarks enables any technician, not just the person who developed the system, to understand the ladder diagram right away. Comments can even quote exact part numbers in case replacements are needed, which speeds up the repair of any problems caused by bad parts. In the old way of working, the person who developed a system kept the program protected, so nobody else could understand how it was done; a correctly documented ladder diagram allows any technician to understand thoroughly how the system functions.

2.6 Power supply

The power supply brings electrical energy to the central processing unit. Most PLC controllers work at either 24 VDC or 220 VAC. On some PLC controllers the power supply is a separate module; those are usually the bigger PLCs, while the small and medium series have the supply module built in. The user has to determine how much current the I/O modules will draw, to ensure that the power supply provides an appropriate amount of current, since different types of modules draw different amounts.

This supply is usually not used to power external inputs or outputs. The user has to provide separate supplies for the PLC controller's inputs and outputs, which ensures a so-called "clean" supply for the PLC controller itself, meaning a supply that the industrial environment cannot affect damagingly. Some of the smaller PLC controllers supply their inputs from a small source already incorporated into the PLC.

2.7 PLC controller inputs

The intelligence of an automated system depends largely on the ability of the PLC controller to read signals from different types of sensors and input devices. Keys, keyboards and functional switches are the basis of the man-machine relationship. On the other hand, to detect a workpiece, monitor a mechanism in motion, or check pressure or fluid level, you need specific automatic devices such as proximity sensors, limit switches, photoelectric sensors, level sensors, etc. Input signals can thus be logical (on/off) or analogue. Smaller PLC controllers usually have only digital input lines, while larger ones also accept analogue inputs through special units attached to the PLC controller. The most frequent analogue signals are the 4 to 20 mA current signal and the millivolt voltage signals generated by various sensors. Sensors are the usual inputs for PLCs, and they are available for many purposes: sensing the presence of parts, measuring temperature, pressure, or some other physical quantity, and so on (inductive sensors, for example, can register metal objects).
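Once a 4 to 20 mA loop signal reaches the PLC, converting it into an engineering value is a routine linear-scaling step. The sketch below assumes a linear sensor; the 0 to 100 output range is a hypothetical example, and a real analogue input module would first convert the current into a raw ADC count.

```python
# Scale a 4-20 mA current-loop signal to an engineering value,
# assuming a linear sensor. The 0-100 range is a hypothetical example.

def scale_4_20ma(current_ma, lo=0.0, hi=100.0):
    """Map 4 mA -> lo and 20 mA -> hi."""
    if current_ma < 4.0:
        # A reading below the 4 mA "live zero" usually means a broken
        # loop or a sensor fault rather than a genuine low value.
        raise ValueError("reading under 4 mA: possible open loop")
    return lo + (current_ma - 4.0) * (hi - lo) / 16.0

print(scale_4_20ma(12.0))  # 50.0: mid-scale current gives a mid-range value
```

The offset zero at 4 mA is the reason this signal is so popular in industry: a healthy sensor at its minimum reading still drives 4 mA, so a reading of 0 mA unambiguously indicates a broken wire.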

Other devices can also serve as inputs to a PLC controller. Intelligent devices such as robots and video systems are often capable of sending signals to a PLC controller's input modules (a robot, for instance, can send a signal to a PLC input to report that it has finished moving an object from one place to another).

2.8 Input adjustment interface

The adjustment interface, often simply called the interface, is placed between the input lines and the CPU unit. Its purpose is to protect the CPU from disproportionate signals from the outside world. The input adjustment module converts the real-world logic level to a level that suits the CPU unit (e.g., an input from a sensor working at 24 VDC must be converted to a 5 VDC signal for the CPU to be able to process it). This is typically done through opto-isolation, which means there is no electrical connection between the external world and the CPU unit: they are "optically" separated, that is, the signal is transmitted as light. The way this works is simple. The external device brings a signal that turns an LED on; its light incites a phototransistor, which starts conducting, and the CPU sees this as logic zero (the voltage between collector and emitter falls under 1 V). When the input signal stops, the LED turns off, the transistor stops conducting, the collector voltage increases, and the CPU receives logic 1.

2.9 PLC controller output

An automated system is incomplete if it is not connected to some output devices. Some of the most frequently used are motors, solenoids, relays, indicators, audible signaling devices and the like. By starting a motor or a relay, a PLC can manage or control anything from a simple system, such as a system for sorting products, all the way up to complex systems such as a servo system for positioning the head of a CNC machine. Output can be of analogue or digital type. A digital output signal works as a switch: it connects and disconnects the line. An analogue output is used to generate an analogue signal (e.g. a motor whose speed is controlled by a voltage that corresponds to the desired speed).
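The motor-speed example can be sketched as a simple scaling function. This assumes a hypothetical 0-10 V analogue output and a 3000 rpm drive purely for illustration; actual ranges depend on the drive and output module:

```python
def speed_to_voltage(rpm, max_rpm=3000.0, v_min=0.0, v_max=10.0):
    """Map a desired motor speed to a 0-10 V analogue output command.

    The PLC's analogue output module would convert this value to a real
    voltage; the drive then runs the motor at the corresponding speed.
    """
    rpm = max(0.0, min(rpm, max_rpm))  # clamp the request to the drive's range
    return v_min + (rpm / max_rpm) * (v_max - v_min)
```

For instance, a request for half of the maximum speed produces a 5 V command.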

2.10 Output adjustment interface

The output interface is similar to the input interface. The CPU brings a signal to the LED and turns it on; the light excites a phototransistor, which begins to conduct electricity, so the voltage between collector and emitter falls to 0.7 V, and a device attached to this output sees this as logic zero. Inversely, when the transistor is not conducting, the output voltage is high and is interpreted as logic one. The phototransistor is not directly connected to the PLC controller output: between the phototransistor and the output there is usually a relay or a stronger transistor capable of switching stronger signals.

2.11 Extension lines

Every PLC controller has a limited number of input/output lines. If needed, this number can be increased with additional extension modules connected through extension lines. Each module can contain an extension of both input and output lines. Extension modules can also have inputs and outputs of a different nature from those on the PLC controller (e.g. if relay outputs are on the controller, transistor outputs can be on an extension module).

"SUPERCOMPUTING"
Article No. :- 1
Date:- 28-9-2011 Time:- 1:00 PM Subject:- “SUPERCOMPUTING”

Hello to all, and welcome to the “WORLD OF KNOWLEDGE”. With the help of this article, I am trying to share knowledge on the above-mentioned subject. No figures or images are provided in this article; if you are interested in figures and images about this article, you can send an e-mail or SMS to me. I always welcome your questions. My e-mail ID and cell no. are given below. Please send your questions about this article via e-mail or SMS only; I will try to contact you as soon as possible. E-mail ID:- ravitrivedi89@live.com Cell no. :- +919722746461

What is a Supercomputer? A supercomputer is a state-of-the-art, extremely powerful computer capable of manipulating massive amounts of data in a relatively short time. Supercomputers are very expensive and are employed for specialized scientific and engineering applications that must handle very large databases or do a great amount of computation, among them meteorology, animated graphics, fluid dynamic calculations, nuclear energy research and weapon simulation, and petroleum exploration.

There are two approaches to the design of supercomputers. One, called massively parallel processing (MPP), is to chain together thousands of commercially available microprocessors using parallel processing techniques. A variant of this, called a Beowulf cluster, or cluster computing, employs large numbers of personal computers interconnected by a local area network and running programs written for parallel processing. The other approach, called vector processing, is to develop specialized hardware to solve complex calculations. This technique was employed (2002) in the Earth Simulator, a Japanese supercomputer with 640 nodes composed of 5,104 specialized processors that execute 35.6 trillion mathematical operations per second; it is used to analyze earthquake and weather patterns and climate change, including global warming.

Currently the fastest supercomputer is the Japanese K Computer, at the RIKEN Advanced Institute for Computational Science, Kobe, which can perform more than 8 quadrillion calculations per second; it uses 68,544 eight-core processors. The fastest American supercomputer is the Cray Jaguar, at Oak Ridge National Laboratory; it utilizes 37,376 six-core and 7,832 quad-core processors to execute as many as 2.33 quadrillion mathematical operations per second. Many high-performance computers use water and refrigeration for cooling, but some are air-cooled and use no more power than the average home.
In 2003 scientists at Virginia Tech assembled a relatively low-cost supercomputer using 1,100 dual-processor Apple Macintoshes; it was ranked at the time as the third fastest machine in the world.
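Both the MPP and cluster approaches rest on the same idea: split one large computation into many independent pieces, work on the pieces at the same time, and combine the partial results. A toy sketch of that divide-and-combine pattern, using a few Python threads in place of thousands of processors (the helper name `split_sum` is invented for this example):

```python
from concurrent.futures import ThreadPoolExecutor

def split_sum(data, workers=4):
    """Sum a large list by splitting it into chunks, summing each chunk
    in a separate worker, and combining the partial sums -- the same
    divide-and-combine pattern an MPP machine or Beowulf cluster uses,
    scaled down to a handful of threads."""
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, pieces)  # each worker sums one chunk
    return sum(partials)                  # combine the partial results
```

On a real cluster the chunks would be distributed to separate machines over the network (e.g. with MPI) rather than to threads, but the structure of the computation is the same.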

Tianhe-1
Tianhe-1, meaning Milky Way, achieved a computing speed of 2,570 trillion calculations per second, earning it the number one spot in the Top 500 survey of supercomputers. The Jaguar computer at a US government facility in Tennessee, which had held the top spot, was ranked second with a speed of 1,750 trillion calculations per second. Tianhe-1 does its warp-speed "thinking" at the National Centre for Supercomputing in the northern port city of Tianjin, using mostly chips designed by US companies. Another Chinese system, the Nebulae machine at the National Supercomputing Centre in the southern city of Shenzhen, came in third.

The supercomputer is the latest and fastest type of computer developed by scientists; the first supercomputers were invented in the 1960s. For details of more types of supercomputers from different countries, please wait for my upcoming article. At the end of this article, there are only a few words to say: "PLEASE READ ONLY ONE BOOK ON DIFFERENT SUBJECT PER DAY".......

For more references on this article, you can read the following book:
1)	TITLE:- Promoting High Performance Computing and Communications    AUTHOR:- United States Congressional Budget Office

4D SONOGRAPHY
Article No. :- 2

Date:- 13-10-2011 Time:- 1:00 PM Subject:- 4 DIMENSIONAL SONOGRAPHY

Hello to all, and welcome to the “WORLD OF KNOWLEDGE”. With the help of this article, I am trying to share knowledge on the above-mentioned subject. No figures or images are provided in this article; if you are interested in figures and images about this article, you can send an e-mail or SMS to me. I always welcome your questions. My e-mail ID and cell no. are given below. Please send your questions about this article via e-mail or SMS only; I will try to contact you as soon as possible. E-mail ID:- ravi.trivedi.08.08.89@gmail.com Cell no. :- +919722746461

DIFFERENCE BETWEEN 2D, 3D & 4D

Keep in mind that 2D, 3D and 4D are features/technologies on the ultrasound system that produce images. 2D is the black-and-white technology and is used for gender determination and to listen to the baby's heartbeat. 3D produces color still-shot pictures. 4D, in simple terms, is 3D in movement. So if your baby is sucking his/her thumb, crying, yawning, etc., you will see his/her movements and gestures in the color technology. In order to capture all of this 4D movement, you will need to make sure you have chosen a video option.

4 DIMENSIONAL SONOGRAPHY

A 4D ultrasound uses a special sonogram machine and takes images from a few different angles, which reveal more detailed images of the fetus, such as facial features. It can also capture movements made by the baby during the procedure. Advances in computer technology are responsible for the higher-quality images in 4D ultrasonography. Images are generated by sending high-frequency sound waves into the mother's body. The waves pass through fluids and bounce back from solid structures. The rebounding waves produce images that are processed quickly, making the images appear to occur in real time.
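The core arithmetic behind every ultrasound image is echo timing: the scanner measures how long a pulse takes to travel to a reflecting structure and back, then converts that round trip into a depth. A minimal sketch, assuming the standard value of 1540 m/s for the speed of sound in soft tissue (the function name `echo_depth_mm` is invented for this example):

```python
def echo_depth_mm(round_trip_time_us, speed_m_per_s=1540.0):
    """Depth of a reflecting structure from the echo's round-trip time.

    The pulse travels to the reflector and back, so the one-way depth is
    speed * time / 2. 1540 m/s is the conventional assumed speed of
    sound in soft tissue.
    """
    round_trip_s = round_trip_time_us * 1e-6   # microseconds -> seconds
    depth_m = speed_m_per_s * round_trip_s / 2.0
    return depth_m * 1000.0                    # meters -> millimeters
```

A scanner repeats this calculation for thousands of echoes per second across many beam angles, which is what makes the real-time 3D/4D display possible.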

Occasionally, a two-dimensional ultrasound may indicate a problem and a more sophisticated 4D ultrasound is needed to confirm an abnormality. For many pregnant women there is no medical need for a 4D ultrasound; the mom-to-be simply wants to get a better look at her baby. Most centers that perform 4D ultrasounds recommend that the test be performed after twenty-five weeks' gestation, when the baby is usually big enough for its features to be seen clearly.

After a conductive gel is applied to the abdomen, the procedure is done by gliding a transducer over the pregnant woman's abdomen. A monitor is within view, allowing the woman to watch the fetus as the ultrasound is completed. The mom is given a keepsake photo and video of the baby.

The 4D ultrasound takes these sound-wave pictures at a very fast tempo, allowing them to come back in real time and creating a form of animated display for the parents. They will actually be able to see their child moving in the womb. Using a display screen, an electronics console, and a handheld transducer, the sonography device emits high-frequency sound waves, and when the waves bounce back they create an echo that shows up on the screen as a picture. There is no radiation involved in an ultrasound scan, so the process is healthy for the newborn. 4D sonography allows medical doctors to conduct examinations of unborn babies that were not possible before, because they can see the entire picture and capture the movement. Ultrasounds are used by doctors to see how the baby is developing, determine its age, recognize twins, recognize abnormalities, identify the sex, and estimate the weight of the baby. People were thrilled with the introduction of 3D sonography, which allowed them to see an incredibly clear picture of the baby in the womb. The 4D ultrasound scans are even more amazing to future mothers and fathers because they are able to see the movement of the baby within the tummy.

SAFETY FEATURES OF 4D SONOGRAPHY

Most medical experts agree that 4D ultrasounds are safe for use during pregnancy. These ultrasounds allow parents and medical staff members to capture images of a developing baby in 3D and witness movements at the same time, which is why it's called 4D. Parents and medical personnel can view the baby and its movements in real time on a computer monitor. Some people have expressed concern over the fact that the sound waves used in a 4D ultrasound can raise tissue temperatures. Many medical experts assert, however, that the potential temperature change is not significant enough to cause harm. Unlike x-rays, 4D ultrasounds do not involve the use of radiation, so there is no risk of radiation-related cancer or tissue damage from a 4D ultrasound.

Because 4D machines create a higher-quality image, some physicians are concerned that the energy level used in 4D ultrasounds may be higher, although this has not been proven. For women who do want a keepsake photo and opt for a 4D ultrasound, it is important to be sure the technician performing the test is a licensed sonographer. Women considering a 4D ultrasound should talk to their doctor about any safety concerns prior to the procedure.

At the end of this article, there are only few words to say, "PLEASE READ ONLY ONE BOOK ON DIFFERENT SUBJECT PER DAY".......

Reference book for more information on this article:

“ATLAS OF 3 & 4 DIMENSIONAL SONOGRAPHY IN OBSTETRICS AND GYNECOLOGY” BY:- DAVID JACKSON & ASIM KURJAK