Talk:Data center/Archive 1

History
I am doing some research on the subject of data centres, and a quite simple question, one that will cause quite a bit of debate, relates to their history: when did the term "data centre" first come into common use?

I remember visiting the Manchester University 'Main Computer Suite' in 1984 and it was in effect what we refer to today as a Data Centre with a few minor differences. (mainframes and no switched network)

So did it start with the implementation of rack-mount servers and a structured cabling infrastructure? If it was the latter, that would mean no earlier than 1987.

When did Rack Mount servers start being used? The earliest I can remember is the late 1990s.

Or am I barking up the wrong tree completely, and is it all based around the internet and when we all started to use the World Wide Web in anger?

Any feedback would be helpful.

Caveman107 14:08, 11 June 2007 (UTC)


 * > When did Rack Mount servers start being used? The earliest I can remember is the late 1990s.
 * Please define "server" as many early computers in the 1970s were rack mounted. Nearly all S-100 bus machines produced from the mid 1970s to mid-1980s (when S-100 more or less died) were rack mount and they were used as general purpose servers too.  As S-100 died out they were replaced by rack mount machines containing IBM PC/AT compatible motherboards.


 * As for the history of "data center" - I don't know that as well but iirc, there have been companies renting rack space since the 1970s and possibly into the 1960s. Internal corporate "computer rooms" have been called "data centers" too but we'd need to dig through the old computer books from the 1960s, 70s, etc. to see when the term was used for the computer room and when it morphed into a term often applied to colo and similar rented space. Marc Kupper (talk) (contribs) 00:56, 2 September 2007 (UTC)

My father was a Univac technician and my father-in-law was an IBMer from 1960 thru 1985. The term "data center" is actually a shortened form of "data processing center", which referred to a typical mainframe environment with the following characteristics: 1) raised access floor for cabling (not originally used for A/C); 2) a mainframe and associated tape drives; and 3) a precision environmental control system to keep temperature and humidity within tight tolerances. Tim Dueck 18:41 PST, 21 May 2008

Additional suggestions for the article
Would anyone be interested in my adding two new sections to the Data Center entry for Wikipedia:

1. The data centre market - demand and supply dynamics
2. Data centre power - taking a holistic approach to data centre power reduction

This is my first ever contribution to Wikipedia so please be gentle! Adamfawsitt (talk) 10:41, 25 June 2008 (UTC)

External link removed
Hello, I've tried to add an external link to a new site for data center professionals (www.datacenterprofessionals.net) but this has been removed. What am I doing wrong (I'm also new to Wikipedia)? It's a valid link and would be useful for anyone involved in data centers. Thanks, Ken--DataCenterProfessionals (talk) 11:24, 8 January 2009 (UTC)

Solution providers review
Hi,

I'm searching for a ranking or customer reviews of top data center solution providers. —Preceding unsigned comment added by 213.236.42.20 (talk) 20:48, 11 August 2009 (UTC)

"Tier 4 data center ... security zones controlled by biometric ..."
There is no mention in the TIA-942 PDF of biometric devices. —Preceding unsigned comment added by 68.183.223.232 (talk) 22:20, 10 June 2008 (UTC)

The link purportedly to TIA-942:Data Center Standards Overview now seems to lead to a commercial site. --Danensis (talk) 13:14, 14 August 2009 (UTC)

Halon
The halon fire system does not push the oxygen out of the room. That would kill anyone left in the room, which halon does not do. It chemically interrupts the fire chain reaction, extinguishing the fire. --Unsigned comment by 67.180.246.232

I've edited the article to clarify the effects of Halon. See Fire Extinguisher and Haloalkanes for lots more details--Mcpusc 03:36, 26 January 2006 (UTC)

New trends in Gas Fire Suppression

The most recent demonstration of N2 being used as a fire-suppression gas was very impressive. Nohmi, the well-known fire-suppression engineering firm, held a series of demonstrations with N2 in August 2009 in Saitama, near Tokyo. Guests were allowed to remain in the test zones when the N2 was released.

The result was impressive, with the fire extinguished immediately and none of us feeling any ill effects.

Another gas being deployed is Inergen, a blend of argon, CO2 and nitrogen. Inergen is probably a trade name.

CO2 is also being used in limited applications.

The move away from Halon and FM-type gases is more pronounced with growing green awareness. CFCs are in general not being used in Japan. (Ozlanka - Tokyo Japan --Ozlanka (talk) 04:45, 14 September 2009 (UTC)) —Preceding unsigned comment added by Ozlanka (talk • contribs) 04:33, 14 September 2009 (UTC)

Uptime Tier Classification As Applied To Non-Facility-Related Areas
The UI tier levels have not been expanded to cover critical factors that could impact the practical operation of a DC.

Factors such as:

Access to the DC in the event of a natural calamity: how many road routes, bridges, rail routes, etc. lead to the DC? Alternate routes are critical to providing access.

Flood Plain: The probability of flooding at the DC site is an important factor that should be considered. Any due diligence would investigate this factor.

The Building per se: Type of construction, Seismic isolation to what level, etc.

Air Quality: In DCs that use air-side free cooling, air quality becomes critical and can impact server life.

PUE: The efficiency of the DC should be a part of the UI classifications. With carbon-emission reductions being highlighted at all levels in the community, an efficient DC should be rated higher. A PUE of 1.5 or under should become one of the qualifications for a Tier 4 DC rating. --Ozlanka (talk) 05:05, 14 September 2009 (UTC)

It is nearly impossible to note all the factors that the TIA will audit to certify the datacenter with a tier classification. Thousands of items. In broad strokes:
 * Tier 1 -> Stand-alone, non-fault-tolerant
 * Tier 2 -> Stand-alone, maintenance-tolerant
 * Tier 3 -> Full redundancy, except when in maintenance
 * Tier 4 -> Full redundancy, including while in maintenance

As you say (I'd like to highlight this): tier classification has nothing to do with PUE (a Tier 4 design has an extremely high PUE). However, it should be taken into consideration, to make the classification more future-proof and possibly green.

Image
The image is incorrectly licensed. It's from Akamai NOCC Tour video. The author is not Gsmith1of2, so he can't just release it to public domain. Sorry if wrong, but worth checking. —Preceding unsigned comment added by Malikussaid (talk • contribs) 10:02, 28 March 2010 (UTC)

Merge from Server farm
The articles for data center and server farm describe the same thing, and in fact they link to each other as equivalents. These articles should be merged into Data center, unless there is a compelling reason to keep Server farm as a different article. Henry Merriam (talk) 21:33, 21 December 2009 (UTC)

Agree. These two articles are too similar to keep them apart Floul1 (talk) 09:26, 12 February 2010 (UTC)

Somehow agree that the two can be merged into data centres, but I think we will be missing the option to add virtual server farms into the server farms article. So I'm guessing the current server farm article can be merged into Data centre, but there should be one more on virtual server farms.

I disagree strongly that these should be merged. Within the industry these two terms mean something very different. A server farm is a collection or set of servers performing the same function whereas a data centre is a facility for hosting the servers in a secure environment. It would make more sense to bring this point out in server farm. Tudorjames (talk) 10:19, 23 March 2010 (UTC)

Tudorjames is correct. A data center is simply not the same thing as a server farm. A data center may contain one or more server farms, or it may contain a collection of servers that perform individual functions and are therefore not configured as a server farm. The sentence "also called a data center[1]" in the first paragraph of server farm is incorrect and should be deleted. For the same reason, the sentence "also called a server farm[1]" in the first paragraph of data center should be deleted. I have been building and managing data centers for over 15 years. Arnoldpieper 15:15, 23 March 2010 (UTC)

The third paragraph in server farm is correct and supports what is being said: The computers, routers, power supplies, and related electronics are typically mounted on 19-inch racks in a server room or data center. In other words, a server farm must be housed or hosted by a data center. It seems to me that whoever inserted the sentences also called a..., has caused this confusion. Otherwise the contents of both terms is for the most part correct and define different things. Arnoldpieper 15:25, 23 March 2010 (UTC)

This is the record of the offending entry on Data Center (the incorrect insertion of also called a server farm): 12:49, 12 May 2009 68.0.124.33 (talk) (15,974 bytes) (yet another synonym) (undo) And here is the record of the offending entry on Server Farm (incorrect insertion of also called a data center): 12:41, 12 May 2009 68.0.124.33 (talk) (4,236 bytes) (add references, as requested.) (undo) These two sentences should be deleted from both terms, stopping the confusion. Arnoldpieper 15:47, 23 March 2010 (UTC)

Highly Disagree. It is true that the two articles are similar. However, the Data center focuses on more than the server farm. Server farms can be used in places other than Data Centers, for instance many schools and work offices would have server farms for use by students/staff so that computer data can be accessed from computers outside the room. I don't think that server farms and data centers are identical. Related, yes. Same, no. I will highly contest the merge of these two articles. - Riotrocket8676 operating from an outside computer. --69.203.108.118 (talk) 04:11, 22 June 2010 (UTC)

I disagree strongly that these should be merged. A datacenter can even contain no serverfarms or servers: it's the facility and not the contents. Maybe the articles should be edited as the terms might be used incorrectly (I haven't checked that for the En wiki though: but it is a mistake that's often made - also in publications from within the industry). Microsoft recently (well, last year) opened a new datacentre in Dublin to house, among other things, several server farms: infrastructure for MSN, infrastructure for cloud computing/Windows Azure, and also storage infrastructure for Microsoft's own support-case handling system MSSolve. And then the datacentre also contains infrastructure for the networking side; thus the switches and routers to connect the serverfarms to MS's other datacentres, the MS internet backbone and the MS corporate backbone. Thus to replace/merge serverfarm with Datacenter wouldn't be OK at all. JanT (talk) 23:37, 22 June 2010 (UTC)

'''Stale, no further consensus. Therefore, the status quo shall remain. Without prejudice.''' -- Riotrocket8676  You gotta problem with that? 01:52, 2 September 2010 (UTC)

Uptime Institute vs. TIA-942 Classifications
The classifications provided by the Uptime Institute (Tier I - IV) are not identical to the TIA-942 classifications (Tier 1 - 4). http://professionalservices.uptimeinstitute.com/myths.htm Oracleofbargth (talk) 17:23, 14 June 2011 (UTC)

hot/cold air section appears to be cut and paste
the bit on hot and cold aisles could be good, but right now it is a cut and paste job from an external site.

Either the section should be reverted for copyright reasons, or the author has to say that they work for the company and release the text, in which case it will need editing for CoI issues, and to sound less like an advertisement. SteveLoughran (talk) 19:40, 14 June 2011 (UTC)

Add to the Requirements for modern data centers section
I would like to add the following to the "Requirements for modern data centers" section. This sub-section discusses data center modernization.

There is a trend to modernize data centers in order to take advantage of the performance and energy efficiency increases of newer IT equipment and capabilities, such as cloud computing. This process is also known as data center transformation.[1]

Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company IDC puts the average age of a data center at nine-years-old.[1] Gartner, another research company says data centers older than seven years are obsolete.[2] In May 2011, data center research organization Uptime Institute, reported that 36 percent of the large companies it surveyed expect to exhaust IT capacity within the next 18 months.[3]

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach.[4] The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.


 * Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. This project also helps to reduce the number of hardware, software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer ones that provide increased capacity and performance. Computing, networking and management platforms are standardized so they are easier to manage.[5]
 * Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. Virtualization helps to lower capital and operational expenses,[6] and reduce energy consumption.[7] Data released by investment bank Lazard Capital Markets reports that 48 percent of enterprise operations will be virtualized by 2012.[8] Gartner views virtualization as a catalyst for modernization.[9]
 * Automating: Data center automation involves automating tasks such as provisioning, configuration, patching, release management and compliance. As IT administration and staff time is at a premium, automating tasks makes data centers run more efficiently.[9]
 * Securing: In modern data centers, the security of data on virtual systems is integrated with existing security of physical infrastructures.[9] The security of a modern data center must take into account physical security, network security, and data and user security.

[1] Mukhar, Nicholas. "HP Updates Data Center Transformation Solutions," August 17, 2011 http://www.mspmentor.net/2011/08/17/hp-updates-data-transformation-solutions/

[2] Sperling, Ed. "Next-Generation Data Centers," Forbes, March 15. 2010 http://www.forbes.com/2010/03/12/cloud-computing-ibm-technology-cio-network-data-centers.html

[3] Niccolai, James. "Data Centers Turn to Outsourcing to Meet Capacity Needs," CIO.com, May 10, 2011 http://www.cio.com/article/681897/Data_Centers_Turn_to_Outsourcing_to_Meet_Capacity_Needs

[4] Tang, Helen. "Three Signs it's time to transform your data center," August 3, 2010, Data Center Knowledge http://www.datacenterknowledge.com/archives/2010/08/03/three-signs-it%E2%80%99s-time-to-transform-your-data-center/

[5] Miller, Rich. "Complexity: Growing Data Center Challenge," Data Center Knowledge, May 16, 2007 http://www.datacenterknowledge.com/archives/2007/05/16/complexity-growing-data-center-challenge/

[6] Sims, David. "Carousel's Expert Walks Through Major Benefits of Virtualization," TMC Net, July 6, 2010 http://virtualization.tmcnet.com/topics/virtualization/articles/193652-carousels-expert-walks-through-major-benefits-virtualization.htm

[7] Delahunty, Stephen. "The New urgency for Server Virtualization," InformationWeek, August 15, 2011. http://www.informationweek.com/news/government/enterprise-architecture/231300585

[8] Higginbotham, Stacey. "When It Comes to Virtualization, Are We There Yet?," GigaOM http://gigaom.com/2010/04/19/when-it-comes-to-virtualization-are-we-there-yet/

[7] Forgione, Joe. "Five Top Data Center Protection Challenges and Best Practices for Overcoming Them," ITBusinessEdge, July 25, 2011 http://www.ctoedge.com/content/five-top-data-center-protection-challenges-and-best-practices-overcoming-them

[8] Miller, Rich. "Gartner: Virtualization Disrupts Server Vendors," Data Center Knowledge, December 2, 2008 http://www.datacenterknowledge.com/archives/2008/12/02/gartner-virtualization-disrupts-server-vendors/

[9] Ritter, Ted. Nemertes Research, "Securing the Data-Center Transformation Aligning Security and Data-Center Dynamics," http://lippisreport.com/2011/05/securing-the-data-center-transformation-aligning-security-and-data-center-dynamics/

Sfiteditor (talk) 21:56, 26 August 2011 (UTC)

I went ahead and posted this text in the main page. Sfiteditor (talk) 23:35, 9 September 2011 (UTC)

Merge with Modular data center
The article Modular data center could be merged here into Data center. These articles should be merged into Data center, because this would make it a more complete article. بازرس (talk) 15:18, 6 May 2012 (UTC)
 * Support Merge - Modular data center could stand to be pared down some anyways; there are far too many external links. VQuakr (talk) 03:27, 7 September 2012 (UTC)

Data Center vs. Server Farm
Data centers make very bad server farms, and no server farm could be certified as a data center. Only in the weird world of Wikipedia, where the people writing the articles know nothing about the industry, would such confusion exist.

Data storage (backup) must be deep-vaulted, secure, and with very limited access, as remote as possible. Server farms must be at major communications intersections (MAE East, AADS, Palo Alto, etc.). All major telecom hubs are in major cities, and true vaulted data centers must never be located where rioting or civil unrest could interrupt service.

Please stop making this Wikistupidia: if you do not know the subject matter, do not post. Thanks. Scottprovost (talk) 20:09, 3 February 2010 (UTC)


 * Okay then, to prevent this from becoming "wikistupidia", could someone who knows this topic please edit in a statement about how a Data Center and Server Farm clearly differ inside the actual article before I do? Airelor (talk) 19:56, 23 September 2012 (UTC)

Storage?
Came here to find out the actual medium being used to store data in these data centers: nothing in article about this basic fact! They're called DATA centers, therefore the number one thing they do is handle DATA, and part of that requires STORAGE of DATA. Yet nothing in the article explains what companies are using to store data on (2.5", 3.5",...?? 3TB, 4TB drives...?? SATA, SAS,...??). I'm left none the wiser after looking on this page. Jimthing (talk) 07:42, 22 January 2013 (UTC)


 * @Jimthing. What's your point/question: what do you expect to find? That there is only one way to store data on some storage array in a datacenter? Nope: anything is possible - but very often (large) SAN arrays will be used that are shared between many servers via iSCSI or Fibre Channel. So best to look for SAN or Storage area network and NAS: Network Attached Storage. And these arrays can use a mix of media: disk drives in some RAID config, maybe in combination with solid-state disks for faster throughput...
 * But you will also find servers with their own local (SATA/SAS) disks, e.g. for booting the OS. Or there is no HDD in the local system as it boots from SAN, or it has the basic OS on an SD card (for example, you can get blade servers that have an SD memory card on which VMware ESXi is installed, and all data (the virtual disks for the virtual machines running on the ESX node) comes from a SAN). One final comment: a datacenter doesn't have to mean that you store data - you mainly process data in a datacenter: it is not a word for a "data library" but a location where you process data. And then it is handy to store it somewhere. Tonkie (talk) 05:35, 23 January 2013 (UTC)

Incorrect ASHRAE specifications listed in Wiki entry
The Wiki states that "ASHRAE's "Thermal Guidelines for Data Processing Environments"[3] recommends a temperature range of 20–25 °C (68–75 °F) and humidity range of 40–55% with a maximum dew point of 17°C as optimal for data center conditions.[4]"

But if you follow the link, the range of 20–25 °C was the recommended range in 2004. The 2008 recommended range is 18–25 °C, with recommended dew point between 5.5 °C and either 15 °C or 60% relative humidity (whether ASHRAE meant the lesser of the two, or the greater of the two, is unclear). NextHopSelf (talk) 00:14, 19 January 2009 (UTC)

The page references obsolete ASHRAE thermal guidelines. The ranges stated are from the 2004 version (68–77 °F and 40–55% RH). The current version is dated 2011, and the recommended ranges are now 64.4–80.6 °F, with a dew point of 41.9–59 °F capped at 60% RH.

Thanks, Terry Rodgers, CPE, CPMP and ASHRAE TC9.9 member.173.188.175.76 (talk) 02:26, 2 August 2013 (UTC)

The guidelines have been updated to the 2011 values & the reference was updated to the 2012 official ASHRAE publication (the 2011 white paper that originally published them has been removed from the site). Ahadenfeldt (talk) 22:22, 2 December 2013 (UTC)
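For readers following this thread, the 2011 envelope quoted above can be sketched as a simple range check. This is an illustrative sketch only (the function name is invented here, and the bounds are just the figures quoted in the comments above, not an authoritative restatement of the standard):

```python
# Hypothetical helper: test a measurement against the 2011 ASHRAE
# "recommended" envelope as quoted in this thread:
# temperature 64.4-80.6 degF, dew point 41.9-59 degF, capped at 60% RH.
def in_recommended_envelope(temp_f, dew_point_f, rel_humidity_pct):
    return (64.4 <= temp_f <= 80.6
            and 41.9 <= dew_point_f <= 59.0
            and rel_humidity_pct <= 60.0)

print(in_recommended_envelope(72.0, 50.0, 45.0))  # True: well inside the envelope
print(in_recommended_envelope(85.0, 50.0, 45.0))  # False: too warm
print(in_recommended_envelope(72.0, 50.0, 70.0))  # False: over the 60% RH cap
```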

a copy-paste?
This 2008 article from techtarget was not mentioned as source, but could be a copy-paste in the Uptime Institute tier levels. The wikipedia article was created much later. --K0zka (talk) 12:37, 15 January 2014 (UTC)

Spelling of Data Center
Spelling of "data center" is not consistent in the article (or indeed on this talk page) - the term "datacenter" is often used, although I cannot find the single word without a space in either Oxford or Cambridge online dictionaries and Merriam-Webster does not define it at all. --BanzaiSi (talk) 17:21, 10 June 2014 (UTC)

Feel free to correct these errors. I quite often use "datacenter" but I think I am incorrect. Robert.Harker (talk) 18:52, 10 June 2014 (UTC)

Energy efficiency: PUE
In the "Energy efficiency" section, the statement "The average data center in the US has a PUE of 2.0, meaning that the facility uses two watts of overhead power for every watt delivered to IT equipment" is not correct. A PUE of 2.0 means that the facility uses two watts of TOTAL power for every watt delivered to IT equipment. Total power is overhead power plus IT equipment power. Thus a facility with two watts of OVERHEAD power for every watt of IT equipment power would have a PUE of 3.0, not 2.0. The text has been corrected. Piperh (talk) 18:42, 19 January 2015 (UTC)
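The arithmetic behind this correction can be made concrete with a small sketch (function name invented here for illustration):

```python
# PUE = total facility power / IT equipment power,
# where total power = IT equipment power + overhead power.
def pue(total_facility_power_w, it_equipment_power_w):
    return total_facility_power_w / it_equipment_power_w

# Two watts of TOTAL power per watt of IT load (i.e. one watt of
# overhead per watt of IT load) gives PUE 2.0:
assert pue(2.0, 1.0) == 2.0

# Two watts of OVERHEAD per watt of IT load means three watts total,
# which is PUE 3.0 - the distinction the correction above points out:
assert pue(3.0, 1.0) == 3.0
```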

Merger proposal
The server room article appears to duplicate much of the content in the data center article in a slightly stubbier form. There is a stale discussion on merging server farm into data center from 2010, and while I agree that there is a notable difference between the use of "server farm" and "data center" in the industry, "server room" seems to be much more poorly defined. At any rate, I don't think all three of these articles should exist separately, and we should do some cleanup to make sure Data center and Server farm contain their pertinent information with not too much overlap. Jenrzzz (talk) 01:57, 7 September 2012 (UTC)
 * Seems a matter of scale that could be handled in one article. User:Fred Bauder Talk 12:01, 25 September 2012 (UTC)

A server room can exist in any building or business, and that room's owner can (usually) choose whether or not to adhere to the standards of a data center. Server farms and data centers can both be used by a single business or entity or by multiple businesses or entities, and usually exist for that exclusive purpose. A server room is normally a room in a building that has been purposed for housing the servers of (usually and more likely) only one business or entity, where the owner decides how much continuity is required or can be afforded unless otherwise required. I recommend making the distinction that a server room serves only one entity, and merging the server farm and data center articles while noting the distinctions between the two. Once a server room houses servers for more than one entity and spills over into standards regulation, it graduates to becoming a data center (or server farm). — Preceding unsigned comment added by Palmplant (talk • contribs) 15:52, 16 October 2012 (UTC)


 * Data center and Server farm are more conceptually similar than are Data center and Server room. Typically a data center serves multiple businesses or organizations. A server farm is typically owned and used by a single organization but physically could be mistaken for a data center. A server room is typically a small cluster of servers that serves a single organization on-location. 67.252.103.23 (talk) 19:27, 29 July 2014 (UTC)
 * Oppose. Data Center is the physical space, with cooling, power and network connectivity. Server farm is just a "bunch of servers" in the same location or even across multiple locations. Server room is just that: a room with servers and other equipment inside. A DC can have a single server room or several tens of them; it depends. From a layman's perspective one can think of the Data Center as the "garage house", the Server farm as the "car fleet" (which can be inside a single garage, but also spread across several garages in several garage houses) and the server room as the individual garage (which, at the extreme, can also be the sole garage in a "garage house"). All quite distinct concepts. 195.212.29.186 (talk) — Preceding undated comment added 15:12, 25 May 2015 (UTC)

Grades/Tiers
I've left a link to http://www.donelan.com/design/general.html which describes the different "grades" of datacentres. It's practically the only reference on the subject I've found, despite everyone boasting they're a "class A datacenter". So the question is: is this simply another marketing buzzword, or does it actually mean something? I'm hoping the article could be updated to mention this. (I will try and get to it eventually, but in case I don't, I'm leaving these notes.) --geoff_o 15:48, 23 March 2006 (UTC)

The Donelan link seems to not work anymore, shall we remove it? JonnyRo

The Uptime Institute has a rating system for datacenters (tier 1 through 4). Ben 23:42, 9 June 2006 (UTC)

- Temperatures: Temperatures within a DC will vary, especially if you are using a hot-aisle/cold-aisle rack configuration, but as a general rule 22 °C +/- 1 degree is the typical target temperature range we look for and most vendors will offer. Not 17 degrees. Servers don't need to actually be cold, they just need not to be hot.

- My impression of the Uptime Institute rating scales is that they provide just enough information to give the impression of being useful without providing a practical benefit. The goal, of course, being that you think you need to hire UI to guide you through the gray areas. Fine technique from a marketing point of view, but of limited use in evaluating a DC. For example: what kinds of single points of failure are allowable in a Tier III center? If you just fix those SPoFs, does it become Tier IV? Can an N+1 system have a SPoF? What level of granularity are you assuming when you are discussing SPoFs? Systems? Components? Parts? Do you need switchgear on each genset to be at N+1, or is one switchgear for an N+1 number of gensets OK? Meaning, is the switchgear part of the system when you describe it as N+1? Again, fair enough to UI - answering these questions is what they are paid to do. But the I through IV description is, to my mind, not much better than "not so good" to "pretty darn good". But I am curious what others think of this. Do others find UI more useful than I do - the free publications at least?

- There is a movement away from FM200 and other gas systems because they are so costly. Any savings you thought you had because of not damaging as much equipment (remember that you are still coating the area with some form of chemical or mist, which is going to put the machines out of action anyway) are outweighed by the big up-front cost, the constant cost of maintenance, and the high expense and time delay of recharging accidentally fired systems. A pre-action system, with no water in the pipes until the VESDA alarm activates, and zones of control - under floor, specific areas, etc. - is what we are going back to.

Added Sep. 14, 2009

"This arrangement is often made to achieve N+1 Redundancy in the systems." The N+1 referred to here is often open to interpretation. What ratio is actually practical and reliable needs to be well thought out. A 4:1 ratio, where N=4 (four active units plus one spare), is a robust backup arrangement, and we often use this ratio for UPS redundancy. In some equipment, such as cooling systems, a 5:1 ratio can be used, where N=5.

With generators, the question of redundancy becomes more complex. Is N+1 actually required? After all, the generators are the backup for mains power. Do we then need a backup of a backup? Any comments would be welcome. (Ozlanka - Tokyo, Japan --Ozlanka (talk) 04:46, 14 September 2009 (UTC)) —Preceding unsigned comment added by Ozlanka (talk • contribs) 04:41, 14 September 2009 (UTC)
 * Generators are not N+1 to "back up the backup", but basically to make sure that when one of them does not start up, the DC "survives" the outage. If they were running all the time, there would be no need for N+1, but it is much cheaper to keep them N+1 and powered off than to keep them running or to do the startup checks more frequently than you can afford with an N+1 setup. Also, the more reliable the generator layer, the more you can save on UPS hold-up times... 195.212.29.186 (talk) 15:19, 25 May 2015 (UTC)
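The N+1 arithmetic this thread keeps circling can be sketched in a few lines. This is a minimal illustration of the idea, not a sizing method from the discussion; the function name and kW figures are invented for the example:

```python
import math

# N+1 (or N+spares) sizing sketch: N is the number of units needed to
# carry the load; the design installs N plus the number of spares.
def units_to_install(load_kw, unit_capacity_kw, spares=1):
    n = math.ceil(load_kw / unit_capacity_kw)  # units required for the load
    return n + spares

# The 4:1 ratio mentioned above (N=4, one spare): a 1000 kW load on
# 250 kW UPS modules installs five modules, tolerating one module failure.
print(units_to_install(1000, 250))  # 5
```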

External links modified
Hello fellow Wikipedians,

I have just added archive links to 1 external link on Data center. Please take a moment to review my edit. If necessary, add after the link to keep me from modifying it. Alternatively, you can add to keep me off the page altogether. I made the following changes:
 * Added archive https://web.archive.org/20120103183406/http://blog.transitionaldata.com:80/aggregate/bid/37840/Seeing-the-Invisible-Data-Center-with-CFD-Modeling-Software to http://blog.transitionaldata.com/aggregate/bid/37840/Seeing-the-Invisible-Data-Center-with-CFD-Modeling-Software

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

Cheers.—cyberbot II  Talk to my owner :Online 03:33, 7 January 2016 (UTC)

History
Here is a revised version of the history section. Some information was taken out and a Section on the cost of space was added. Please contact me if these changes can be added to the article. Heronhaus (talk) 18:39, 17 September 2016 (UTC)

Data centers have their roots in the huge computer rooms of the early ages[when?] of the computing industry. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power, and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design-guidelines for controlling access to the computer room were therefore devised.

As information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The advent of Unix in the early 1970s led to the proliferation of freely available Unix-compatible operating systems, such as Linux, on PCs during the 1990s. These were called "servers", as timesharing operating systems like Unix rely heavily on the client-server model to facilitate sharing unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for structured network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time.[citation needed]

The boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provided commercial clients with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results. Data centers for cloud computing are called cloud data centers (CDCs), but nowadays the distinction between these terms has almost disappeared and they are all simply called "data centers".

With an increase in the uptake of cloud computing, business and government organizations scrutinize data centers to a higher degree in areas such as security, availability, environmental impact and adherence to standards. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data-center design. Well-known operational metrics for data-center availability can serve to evaluate the commercial impact of a disruption. Development continues in operational practice, and also in environmentally-friendly data-center design.

 * Cost of space. Real estate prices vary greatly according to the geographic location of the data center. For example, in early 2003, commercial property prices in San Francisco were almost double those in other markets, such as Chicago. A comprehensive data center cost model must account for such variance in real estate price.
 * Recurring cost of power. The electricity costs associated with continuous operation of a data center are substantial; a standard data center with a thousand racks spread over an area of 30,000 ft² requires about 10 MW of power for the computing infrastructure. The direct cost of drawing this power from the grid should be included.
 * Maintenance and amortization of power delivery, conditioning and generation. Data centers are a critical resource with minimal affordable downtime. As a result, most data centers are equipped with back-up facilities, such as batteries/flywheels and on-site generators. Such back-up power incurs installation and maintenance costs. In addition, the equipment is monitored continuously, and costs associated with software and outsourced services must be included.
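The power figures above imply roughly 10 kW per rack; a back-of-envelope estimate of the recurring electricity cost can be sketched as follows (a hypothetical illustration; the grid price per kWh is an assumption, not from the proposed text):

```python
# Back-of-envelope recurring power cost for the example facility above:
# 1,000 racks over 30,000 ft² drawing ~10 MW for computing infrastructure.
RACKS = 1_000
TOTAL_POWER_KW = 10_000          # ~10 MW, from the text
PRICE_PER_KWH = 0.10             # assumed grid price in $/kWh (illustrative)
HOURS_PER_YEAR = 8_760

power_per_rack_kw = TOTAL_POWER_KW / RACKS            # 10.0 kW per rack
annual_energy_kwh = TOTAL_POWER_KW * HOURS_PER_YEAR   # 87,600,000 kWh
annual_cost = annual_energy_kwh * PRICE_PER_KWH       # $8,760,000 per year

print(f"{power_per_rack_kw:.1f} kW/rack, ${annual_cost:,.0f}/year")
```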

External links modified
Hello fellow Wikipedians,

I have just modified 12 external links on Data center. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20111106042758/http://www.tiaonline.org/standards/ to http://www.tiaonline.org/standards/
 * Added archive https://web.archive.org/web/20100613072610/http://uptimeinstitute.org/index.php?option=com_docman&task=doc_download&gid=82 to http://uptimeinstitute.org/index.php?option=com_docman&task=doc_download&gid=82
 * Added archive https://web.archive.org/web/20091007121511/http://professionalservices.uptimeinstitute.com:80/UIPS_PDF/TierStandard.pdf to http://professionalservices.uptimeinstitute.com/UIPS_PDF/TierStandard.pdf
 * Added archive https://web.archive.org/web/20101122074817/http://emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf to http://www.emersonnetworkpower.com/en-US/Brands/Liebert/Documents/White%20Papers/Evaluating%20the%20Economic%20Impact%20of%20UPS%20Technology.pdf
 * Added archive https://web.archive.org/web/20101122035456/http://www1.eere.energy.gov:80/femp/pdfs/data_center_qsguide.pdf to http://www1.eere.energy.gov/femp/pdfs/data_center_qsguide.pdf
 * Added archive https://web.archive.org/web/20110728032834/http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf to http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf
 * Added archive https://web.archive.org/web/20160618081417/http://ecoseed.org/en/business-article-list/article/1-business/8219-i-t-industry-risks-output-cut-in-low-carbon-economy to http://ecoseed.org/en/business-article-list/article/1-business/8219-i-t-industry-risks-output-cut-in-low-carbon-economy
 * Added archive https://web.archive.org/web/20100925210539/http://emerson.com/edc/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx to http://www.emerson.com/edc/post/2010/06/15/Introducing-EPA-ENERGY-STARc2ae-for-Data-Centers.aspx
 * Added tag to http://re.jrc.ec.europa.eu/energyefficiency/html/standby_initiative_data_centers.htm
 * Added archive https://web.archive.org/web/20101027083349/http://content.dell.com:80/us/en/enterprise/d/large-business/measure-data-center-efficiency.aspx to http://content.dell.com/us/en/enterprise/d/large-business/measure-data-center-efficiency.aspx
 * Added archive https://web.archive.org/web/20080519213241/http://www.datacenterknowledge.com:80/archives/2008/May/15/ciscos_mobile_emergency_data_center.html to http://www.datacenterknowledge.com/archives/2008/May/15/ciscos_mobile_emergency_data_center.html
 * Added archive https://web.archive.org/web/20080611114732/http://www.crn.com:80/hardware/208403225 to http://www.crn.com/hardware/208403225
 * Added archive https://web.archive.org/web/20130531191212/http://hightech.lbl.gov/documents/data_centers/modular-dc-procurement-guide.pdf to http://hightech.lbl.gov/documents/data_centers/modular-dc-procurement-guide.pdf

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.— InternetArchiveBot  (Report bug) 07:52, 7 December 2016 (UTC)

Extensive rewrite needed
Not just for the copyvios, but do we really need seven sections on design? Timtempleton (talk) 18:42, 28 February 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 5 external links on Data center. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20120223225537/http://hightech.lbl.gov/DCTraining/strategies/mam.html to http://hightech.lbl.gov/dctraining/strategies/mam.html
 * Added archive https://web.archive.org/web/20111203145721/http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability to http://www.datacenterjournal.com/home/news/languages/item/2792-the-price-of-data-center-availability
 * Added archive https://web.archive.org/web/20120416120624/http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf to http://www.apcmedia.com/salestools/VAVR-626VPD_R1_EN.pdf
 * Added tag to http://www.bull.com/extreme-computing/mobull.html
 * Added archive https://web.archive.org/web/20060929131812/http://hightech.lbl.gov/datacenters.html to http://hightech.lbl.gov/datacenters.html
 * Added archive https://web.archive.org/web/20110723081149/http://hightech.lbl.gov/dc-powering/faq.html to http://hightech.lbl.gov/dc-powering/faq.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 06:11, 5 September 2017 (UTC)

Anderson connectors
Seems pollution of the page (at the end of See also), but I may miss something, so I dare not wipe it. --Dominique Meeùs (talk) 16:53, 16 December 2017 (UTC)

Merge from "Data availability"
Half of this information is part of the "Data Center" article; merging into "Data Center" so that ALL of the related info is in one place Pi314m (talk) 07:12, 28 October 2018 (UTC)

Merge vendor list under applications to Disaster Recovery's new Vendors section
The "Disaster recovery" article does not discuss the matter of which vendors exist, nor links re this, hence it made sense to move this information to where it better fits. A link from here to this is part of this edit. Pi314m (talk) 10:00, 28 October 2018 (UTC)

Section "Top data centers and service providers worldwide"
This section should be deleted from this article. If relevant it should be moved to a new List of .... --Zac67 (talk) 19:00, 1 November 2018 (UTC)

CV / CopyVio vetting
This page was already reviewed for CopyVio. The edit described as
 * "Reverted to revision 862476050 by TwoTwoHello" "More CV issues. restoring las known good."

is bogus in that
 * 1) it ADDED +17,221 characters, making it larger than it was before
 * 2) the rackspace.com matter was cleared up-
 * the CopyVio was not in the article text in the first place, but rather in the "|quote=" texts;

that mistake on my part was not repeated in my subsequent edits. Pi314m (talk) 07:19, 16 November 2018 (UTC)

Trimming sections with " "
Sections long enough to be an article that (also) have " " can/should be trimmed. Pi314m (talk) 20:12, 18 November 2018 (UTC)

Reference is no longer alive
Reference 25 is no longer alive. — Preceding unsigned comment added by 2607:9880:1980:1DE:9C7B:FD6F:1F37:D911 (talk) 23:21, 11 March 2021 (UTC)

The History is just WRONG
ENIAC had a sister in England not mentioned, and it was a computer technology development center, not a storage center. Xerox PARC wasn't a storage center but spawned both the Apple desktop and, decades later, Win95.

Texas taxpayers paid for a huge "census computer"; it spawned "SCSI" - a major item. That was a data center.

Unwaveringly: the AT&T long-distance center, AT&T Unix, and its magnetic tape drives were active data centers before hard disks existed.

There may have been "military data silos" before AT&T, but these did not share data, and there is no citation for them. AT&T used data for long distance - which was digital long before people knew what digital transfer was.

The first PUBLIC INTERNET DATA CENTER I can remember was the Sun Microsystems (Solaris) FTP site and, sometime after, Ibiblio. Those besides, colleges (mostly in the USA, or overseas ones that were sold IBM equipment) used to offer "Data Center" services to the public when "the internet" was not yet in the dictionary.

Sun's data center used half of an electric plant's power by the time it was shut down for a more energy-efficient center. — Preceding unsigned comment added by 2601:143:480:A4C0:4ECC:6AFF:FE8E:47D (talk) 22:33, 8 February 2022 (UTC)