To deploy a new application, we now choose among the shared resources available in a data center: any collection of CPU, memory, network, and storage resource pools, on premises or in the cloud.
The traditional server-network-storage model no longer exists.
This enormous flexibility, and the growing interdependencies that come with it, can lead to simultaneous multi-point failures. A security vulnerability in any layer can expose every physical or virtual machine built on it. A single server hotspot in a rack can degrade multiple machines at once and put business service availability at serious risk.
IT infrastructure is converging, yet system management tools remain siloed by function: availability, performance, security (SIEM), change monitoring, and log management. Virtualization and cloud services are monitored by separate tools as well.
Legacy management tools lack a unified data model, common analytics, and the ability to exchange and enrich events at a low level. As a result, each tool requires its own expert to perform true root-cause analysis, and proactive operation becomes impossible.
Combining tools in a common user interface does not solve these issues.
Next-generation solutions take a different view: they treat infrastructure as an engine that generates data. Such a system can learn the infrastructure, understand its data and the domain it comes from, correlate automatically across multiple domains, and raise alerts that trace an issue across those domains.
This is what we call a business service approach to IT infrastructure management: an approach that connects the dots and accelerates understanding.
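The cross-domain correlation idea described above can be sketched in simplified form. The example below is a minimal illustration, not any vendor's actual implementation: the `Event` structure, the `correlate` function, and the host-plus-time-window grouping rule are all assumptions chosen for clarity. It groups events from different monitoring domains that hit the same host within a short time window and emits a single combined alert, rather than leaving each silo to fire its own.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    # Hypothetical normalized event, as a unified data model might define it.
    domain: str    # e.g. "network", "storage", "virtualization"
    host: str      # shared identifier linking events across domains
    time: float    # seconds since epoch
    message: str

def correlate(events, window=60.0):
    """Group events on the same host that occur within `window` seconds
    of each other, and emit one cross-domain alert per multi-domain group."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e.time):
        by_host[e.host].append(e)

    alerts = []
    for host, evs in by_host.items():
        group = [evs[0]]
        for e in evs[1:]:
            if e.time - group[-1].time <= window:
                group.append(e)
            else:
                # Close the current group; alert only if it spans domains.
                if len({g.domain for g in group}) > 1:
                    alerts.append((host, group))
                group = [e]
        if len({g.domain for g in group}) > 1:
            alerts.append((host, group))
    return alerts
```

Fed a network link flap and a storage latency spike on the same host 30 seconds apart, this sketch produces one alert spanning both domains, while an unrelated event on another host produces none. Real systems add topology awareness and learned baselines, but the core idea is the same: a shared data model makes events from separate silos joinable.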