First of all, we need to understand what a data centre is and how it has changed. A data centre is a facility used to collate, store, process and distribute data, enabling an organisation to make decisions, both on an ongoing daily operational basis and on a medium- to long-term strategic basis.
Back in the 60s and 70s, data centres comprised large mainframe computers using magnetic tapes and spools of punched tape to run eight-bit programs. Even NASA used this level of technology to mount the Saturn V missions and the two Voyager missions to the outer solar system.
The facilities used to house the equipment were sizeable, and the environment they sat in was fairly basic.
Fast forward fifty to sixty years, and the computing and storage capacity of today's mobile devices far outstrips anything they could even dream of back then. Even your desktop or laptop routinely runs a 64-bit operating system. Today a cabinet sitting in a broom cupboard could easily do all the processing for all the Saturn V launches and both Voyager missions simultaneously.
Currently, if you are charged with creating a data centre, you need to understand what it is going to be used for and which level (tier) of centre is envisaged.
Ask them up front, at a minimum: "What do you want in this facility?", "What level of redundancy do you want?", "Will the facility be yours or outsourced to a third party?", "Will it be mirrored?" and "What is the budget?"
It is all well and good being told the budget is available. It is not pleasant having to correct the expectations of the organisation asking you to provide the solution because they do not understand the complexity and costs associated with this sort of investment. And we are not just talking about CapEx, but also OpEx.
Typically the clients are Enterprises, Multi-National Corporates, Local Government or Central Government Departments, and possibly the new consolidated Super Government Departments currently being proposed by the UK Government.
I won't cover the geographical, topographical or environmental siting of such centres; I take this as a given, although people still make such mistakes. Nor will I cover the security aspect, as I have dealt with security previously.
Having ascertained which type of data centre is needed, one then has to decide whether it will be a proprietary centre owned by the organisation or a third party option. The third party option can be further divided into three main types: outsourced, colo (colocation) facility, or cloud. There is also a fourth option, originally conceived for the military, often referred to as modular or containerised. In its military guise it may consist of one or more 40ft containers with associated large generators and communications vehicles, which you may find bouncing across the Mojave, Empty Quarter or Negev deserts or the Afghan plains. Within the commercial world this may be used as a temporary solution to meet a specific need, or by an organisation which moves around (mining, petroleum or the like).
Basically, data centres come in four tiers, as defined by the Uptime Institute. Tier 1 is the simplest infrastructure, while Tier 4 is the most complex and has the most redundant components. Each tier includes the required components of all the tiers below it.
Data centre tiers are an efficient way to describe the infrastructure components being utilised within the organisation's data centre. Although a Tier 4 data centre is more complex than a Tier 1 data centre, this does not necessarily mean it is best suited to a business's needs.
Tier 1 has a single path for power, cooling and communications, and few, if any, redundant or backup components. It has an expected uptime of 99.671%, with 28.8 hours of downtime annually.

Tier 2 has a single path for power, cooling and communications, with some redundant and backup components. It has an expected uptime of 99.741%, with around 22 hours of downtime annually.

Tier 3 has multiple paths for power, cooling and communications, and systems are in place so it can be updated and maintained without being taken offline. It has an expected uptime of 99.982%, with 1.6 hours of downtime annually.

Tier 4 is constructed to be completely fault tolerant and has redundancy for every component. It has an expected uptime of 99.995%, with 26.3 minutes of downtime annually.
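The downtime figures for each tier follow directly from the uptime percentages over a 365-day (8,760-hour) year. A quick sketch of the arithmetic (illustrative only; the tier definitions themselves belong to the Uptime Institute):

```python
# Annual downtime implied by an availability percentage,
# assuming a 365-day (8,760-hour) year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per year for a given uptime percentage."""
    return (1 - uptime_percent / 100) * HOURS_PER_YEAR

for tier, uptime in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    hours = annual_downtime_hours(uptime)
    print(f"Tier {tier}: {uptime}% uptime -> {hours:.1f} h "
          f"({hours * 60:.0f} min) downtime per year")
```

Tier 1 comes out at 28.8 hours, Tier 3 at 1.6 hours and Tier 4 at roughly 26.3 minutes, matching the figures above; Tier 2 computes to about 22.7 hours against the commonly quoted 22 hours.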
Even a small data centre/computer suite or modular unit can be constructed to a Tier 4 specification, if absolutely required.
Let us briefly look at the rationale for each type of centre:
The proprietary centre owned by the organisation. We all accept that many organisations are migrating more of their computing to the cloud, but some still maintain critical IT assets on premises. Why move to the cloud? For strategic and financial reasons, and for operational flexibility and scalability. Yet there are six reasons why critical assets are still being maintained on site.
- Direct management and accountability

Try standing in front of your board and/or shareholders and having to admit that your systems were down, or explain how and why the network was invaded by hackers. Direct management of your mission-critical IT assets is still considered best practice in many companies.
- Safekeeping of IP
If your firm is a developer of products, and your custom applications are highly proprietary and central to your competitive advantage, placing them in the cloud can be an unacceptable computing model.
- Security and Governance
In a regulated industry, your auditor will ask you about IT security; if you admit to using third-party cloud-based systems, they will ask to see the cloud provider's third-party audit report.
- Uptime and DR
I have been an engineer working on a trade floor when systems went down; it is not a pretty sight. I have also had to pick up someone else's DR when it failed; again, not pretty. I was just glad it was not me having to explain.
- Support services
When systems are remote, calling for support via an automated phone or chat system can result in user frustration. By contrast, local representatives will visit the user and know the bespoke application: a level of expertise that a more generic cloud-based service simply cannot provide.
- No relationship with vendor community
What happens if the so-called vendor of your cloud service is actually a third party provider, i.e. they do not own the service? If the service fails, you don't have a legal leg to stand on, as you do not have a direct contract with the actual provider. This increases your liability and business risk.
The Third Party or Outsourced
There are several ways we can look at third party arrangements. The first is a reduction in CapEx investment; the second is a potential reduction in OpEx and a real reduction in headcount. After that, we can look at the opportunities such arrangements potentially provide.
Construction of the site must be such that it can withstand a natural disaster. Generally a colo data centre will provide the client with an SLA guaranteeing specific availability or uptime, normally expressed in terms of the four tiers. These centres commonly express efficiency as a power usage effectiveness (PUE) score, with a good PUE score indicating a facility that is more environmentally friendly and cheaper to power.
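PUE is simply total facility power divided by the power that reaches the IT equipment, so a score of 1.0 would mean every watt goes to the IT load, and lower is better. A minimal sketch of the calculation (the facility figures are invented for illustration):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,200 kW total draw, of which 800 kW reaches the IT load.
score = pue(total_facility_kw=1200, it_equipment_kw=800)
print(f"PUE = {score:.2f}")  # every IT watt costs a further 0.5 W in cooling etc.
```

A facility that trims its cooling and distribution overhead lowers its PUE, which is why the score doubles as both an environmental and a cost metric.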
The pricing structure a colo uses to bill the client is normally based on rack space or floor area (by the square foot). Other factors which may influence cost are bandwidth usage, power consumption, geographic location and/or technical support.
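As a rough illustration of how those billing components combine, here is a hypothetical monthly colo bill, priced per rack with metered power and bandwidth; all rates and quantities are invented, not any real provider's tariff:

```python
def monthly_colo_cost(racks: int,
                      rate_per_rack: float,
                      power_kwh: float,
                      rate_per_kwh: float,
                      bandwidth_gb: float,
                      rate_per_gb: float,
                      support_fee: float = 0.0) -> float:
    """Sum the usual colo billing components for one month."""
    return (racks * rate_per_rack
            + power_kwh * rate_per_kwh
            + bandwidth_gb * rate_per_gb
            + support_fee)

# Hypothetical client: 4 racks, 6,000 kWh of metered power,
# 2,000 GB of bandwidth, plus a flat remote-hands support fee.
bill = monthly_colo_cost(racks=4, rate_per_rack=750,
                         power_kwh=6000, rate_per_kwh=0.20,
                         bandwidth_gb=2000, rate_per_gb=0.05,
                         support_fee=300)
print(f"Monthly bill: £{bill:,.2f}")
```

In practice each supplier weights these components differently, which is why comparing colo quotes on rack price alone can be misleading.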
Each supplier will have their own way of doing things, but colo hosts commonly provide security, durability, reliability, cross connectivity, redundant power, compliance with regulatory bodies and onsite support. The advantages of colo hosting are that it is cheaper than building new sites or expanding existing ones; additional space is available to accommodate growth; and the sites have strict security protocols, which may also provide additional protection from cyberattack. In addition, it allows the client to provide their own technology.
This is also sometimes referred to as a fully managed service. The main difference between an outsourced model and colo is that the host provides all the technology and all the technology service support. The pricing structure will be similar, as will the contract duration.
Likewise, the fabric will still need to withstand a natural disaster. The uptime and PUE agreed with the hosting company will dictate the hardware deployed to provide the level of resilience required. As with colo, the outsourced solution will also have strict security protocols, which may provide additional protection from cyberattack.
Cloud solutions come in three basic forms: Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). There are some distinct differences between these three models and what they can offer a business in terms of storage and resources.
Software-as-a-Service is one of the most popular forms of cloud computing. This method of software delivery allows data to be easily accessed from any device with an internet connection and web browser. SaaS is a complete software solution that companies can buy from a cloud service provider on a pay-as-you-go basis.
With Software-as-a-Service, a business rents the use of an app for its organisation, and its users connect to it over the internet. The service provider is responsible for managing the software and hardware, as well as for keeping the app secure and up to date. With SaaS, businesses can get up and running quickly and at minimal cost.
Infrastructure-as-a-Service is another form of cloud computing that delivers fundamental network, storage and compute resources over the internet on a pay-as-you-go basis. In an IaaS model, the cloud provider hosts the components of the infrastructure that are traditionally present in an on-premises data centre, such as storage, servers and networking hardware.
IaaS providers may also provide other key services, such as log access, detailed billing, security, monitoring, backup, recovery, replication, load balancing and clustering. Services and resources are accessed through a wide area network (WAN), such as the internet.
Platform-as-a-Service differs from SaaS and IaaS as businesses rent everything that they need from a cloud provider to build an application, such as infrastructure, development tools and operating systems. PaaS aims to simplify the web application development process by allowing the cloud provider to handle all the backend management.
Businesses can access PaaS over any internet connection, which makes it possible to build entire applications in a web browser. The development environment is not locally hosted which means that developers can work on the application remotely from any location. Although developers have less control over the development environment, they can enjoy significantly less overhead.
Modular or Containerised Data Centre
A modular data centre system is a portable method of deploying data centre capacity anywhere it is needed. Originally designed for military use, modular data centres (MDCs) use standardised critical infrastructure systems and pre-engineered containers or cabins to provide a data centre that can be transported to its point of use and is more easily scalable than a traditional 'bricks and mortar' facility. They are typically used by NGOs, by commercial operations (mining, oil exploration, etc.) deploying to areas without formal infrastructure, and by government relief efforts. They are also ideal when organisations need to grow quickly to meet immediate needs and existing infrastructure cannot meet the commercial demands placed upon it.
This modular system can be used in the normal data centre format, as an extension to your existing estate, albeit contracted for a specified time period. It can also meet the needs of Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS), dependent on client needs.