Granular configuration automation

Granular configuration automation (GCA) is a specialized area of configuration management that focuses on visibility and control over an IT environment's configuration and bill of materials at the most granular level. The framework aims to improve the stability of IT environments by analyzing granular configuration information. It addresses the need to assess the threat level of environment risks, allowing IT organizations to focus on the risks with the highest impact on performance. GCA combines two major trends in configuration management: the move to collect detailed, comprehensive environment information and the growing use of automation tools.

Driving factors
For IT personnel, IT systems have grown in complexity, supporting a wider and growing range of technologies and platforms. Application release schedules are accelerating, requiring greater attention to more information. The average Global 2000 firm has more than a thousand applications that its IT organization deploys and supports. New technology platforms such as cloud computing and virtualization offer benefits in reduced server footprint and energy savings, but complicate configuration management through problems such as sprawl. The need to ensure high availability and consistent delivery of business services has led many companies to develop automated configuration, change and release management processes.

Downtime and system outages undermine the environments that IT professionals manage. Despite advances in infrastructure robustness, occasional hardware, software and database downtime still occurs. Dun & Bradstreet reports that 49% of Fortune 500 companies experience at least 1.6 hours of downtime per week, translating into more than 80 hours annually. The growing cost of downtime has given IT organizations ample evidence of the need to improve processes. A conservative estimate from Gartner pegs the hourly cost of downtime for computer networks at $42,000, so a company that suffers worse-than-average downtime of 175 hours a year can lose more than $7 million per year.
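The cost figure above follows directly from the two cited estimates; a quick sketch of the arithmetic (using only the numbers given in the text):

```python
# Worked version of the downtime-cost estimate cited in the text.
hourly_cost = 42_000       # Gartner's conservative estimate, dollars per hour
annual_downtime_h = 175    # worse-than-average downtime, hours per year

annual_cost = hourly_cost * annual_downtime_h
print(f"${annual_cost:,}")  # $7,350,000 (more than $7 million per year)
```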

The demands and complexity of incident investigation have put further strain on IT professionals, whose experience alone cannot keep pace with the scale of the environments in their organizations. An incident may be captured, monitored and reported using standardized forms, often through a help-desk or trouble-ticket software system, and sometimes even under a formal process methodology such as ITIL. But the core activity is still a technical specialist "nosing around" the system, trying to figure out what is wrong based on previous experience and personal expertise.

Potential applications

 * Release validation – validating releases and mitigating the risk of production outages
 * Incident prevention – identifying and alerting on undesired changes, thereby avoiding costly environment incidents
 * Incident investigation – pinpointing the root-cause of the incident and significantly cutting the time and effort spent on investigation
 * Disaster recovery verification – accurately validating disaster recovery plans and eliminating surprises at the most vulnerable times
 * Security – identifying deviations from security policy and best practices
 * Compliance – discovering non-compliant situations and providing a detailed audit trail
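Several of the applications above (release validation, incident prevention, incident investigation) reduce to the same core operation: comparing an environment's configuration against a known-good baseline at the level of individual parameters. The following sketch illustrates the idea only; it is not the API of any particular GCA product, and the configuration keys and values are hypothetical.

```python
# Illustrative sketch of granular configuration drift detection:
# snapshots are modeled as nested dicts of configuration parameters.

def flatten(config, prefix=""):
    """Flatten a nested config into dot-separated granular keys."""
    items = {}
    for key, value in config.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            items.update(flatten(value, path))
        else:
            items[path] = value
    return items

def diff_configs(baseline, current):
    """Report added, removed, and changed parameters at the granular level."""
    base, curr = flatten(baseline), flatten(current)
    return {
        "added":   sorted(set(curr) - set(base)),
        "removed": sorted(set(base) - set(curr)),
        "changed": sorted(k for k in set(base) & set(curr)
                          if base[k] != curr[k]),
    }

# Hypothetical snapshots: a validated release baseline vs. the live environment.
baseline = {"db": {"pool_size": 50, "timeout_s": 30}, "jvm": {"heap_mb": 4096}}
current  = {"db": {"pool_size": 10, "timeout_s": 30},
            "jvm": {"heap_mb": 4096, "gc": "G1"}}

print(diff_configs(baseline, current))
# {'added': ['jvm.gc'], 'removed': [], 'changed': ['db.pool_size']}
```

In this framing, release validation checks the diff before go-live, incident prevention alerts when the diff becomes non-empty, and incident investigation replays diffs between snapshots taken before and after an outage.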