Categories: Servers

Server Virtualisation explained

This is the first in a 10-part series on Server Virtualisation.

You’d probably be surprised to learn that the concept of virtualisation goes all the way back to the infancy of computing, when in the 1960s virtual partitioning was used to allow segregated space on the same storage medium. In much more recent years, many businesses have begun to embrace the continuously growing benefits of server virtualisation. Whether it be on-premises virtual infrastructure or cloud-based hosting, the expanding number of available platforms and the competition between technologies have made virtualisation options increasingly attractive for businesses of any size.

So what is virtualisation? For those new to the concept: virtualisation is a rapidly developing technology that combines hardware and software engineering, and it is fundamentally changing the way people make use of their IT infrastructure. In simple terms, hardware virtualisation is the ability to run your Windows or Linux server infrastructure as operating systems on a software layer, known as a hypervisor, which sits over the underlying physical hardware. Previously, the only option was to run dedicated physical hardware for each server you required; now you can run many servers as software containers on a single piece of hardware.

Example: You are looking at refreshing your current IT infrastructure, which consists of three physical servers: a domain controller, an Exchange server and an SQL server. Instead of purchasing new hardware for each, you decide to purchase a single piece of hardware and run all three, still as individual servers but now as software instances known as “guests”, on that one physical device, or “host”.

There are a number of methods for performing bare-metal (physical-to-virtual) conversions, each varying depending on the hypervisor you decide to deploy, but the benefits of virtualising your servers in this way apply to all:

  • Consolidation of servers into virtual machines greatly reduces the amount of physical hardware required as well as the associated costs of space, power, cooling and maintenance requirements.
  • Virtual machine density (i.e. the number of guests capable of running and performing as required on any given host) can be increased by refining the virtual resources allocated to each virtual machine, potentially allowing you to run more guests on the same hardware.
  • Servers running as virtual machines will start/restart faster and can dynamically adjust their memory and storage resources with no downtime.
  • The management and monitoring complexities of IT infrastructure are greatly reduced and can be controlled from a single pane of glass.
  • Virtual infrastructure deployments are created with expansion in mind, providing fast provisioning of new servers and scalability as needed.
  • The life of older or custom applications can be extended by running them alongside newer operating systems on the same hardware.
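
As a rough illustration of the density point above, here is a minimal sketch of how you might estimate how many guests a host can carry. All figures, including the CPU overcommit ratio, are assumptions for illustration only, not vendor sizing guidance:

```python
# Toy capacity estimate for virtual machine density on a single host.
# Every number here is an illustrative assumption, not sizing guidance.

HOST_VCPUS = 32          # logical processors on the host
HOST_RAM_GIB = 128       # physical RAM on the host
CPU_OVERCOMMIT = 4.0     # assumed vCPU:pCPU ratio for light workloads
RAM_RESERVE_GIB = 8      # headroom kept aside for the hypervisor itself

def max_guests(vcpus_per_guest: int, ram_per_guest_gib: int) -> int:
    """Return how many identical guests fit within both CPU and RAM budgets."""
    cpu_limit = int(HOST_VCPUS * CPU_OVERCOMMIT) // vcpus_per_guest
    ram_limit = (HOST_RAM_GIB - RAM_RESERVE_GIB) // ram_per_guest_gib
    return min(cpu_limit, ram_limit)  # the tighter resource wins

print(max_guests(vcpus_per_guest=4, ram_per_guest_gib=8))  # RAM-bound: 15
```

Trimming the RAM assigned to each guest (say from 8 GiB to 4 GiB where workloads allow) shifts the limiting resource and raises the density, which is exactly the refinement the bullet above describes.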

By now you are probably thinking, “Doesn’t grouping all my servers onto one piece of physical hardware create a single point of failure for all of them?” This is where clustering comes in. Most virtualisation deployments include at least two hypervisor nodes, either containing local storage or connected to one or more SANs, creating redundancy at both the virtual machine and storage level. A deployment of this type can, and often does, include considerations for high availability as well as redundancy. The exact technologies available vary between hypervisors and licensing levels, but common HA functions include:

  • The use of virtual machine replicas (offline, continuously updated copies) residing on multiple hosts, which can be automatically failed over to in the event of one or more hosts being lost.
  • The shifting of running virtual machine workloads, including their storage, from one host to another without disruption.
  • Hybrid deployments incorporating both on-premises and cloud virtualisation, providing geographical redundancy for mission-critical systems.
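
To make the replica idea concrete, here is a toy sketch of the decision a cluster makes when a host fails: pick a surviving host that holds an up-to-date replica and bring the guest online there. The host names, health states and load figures are invented for illustration; real hypervisors implement this logic internally:

```python
# Toy failover target selection: among the hosts holding a replica of a
# guest, pick the healthy one with the least load. Purely illustrative.

def pick_failover_host(replica_hosts, healthy, load):
    """Return the least-loaded healthy host holding a replica, or None."""
    candidates = [h for h in replica_hosts if healthy.get(h, False)]
    if not candidates:
        return None  # no surviving replica: fall back to backup/DR instead
    return min(candidates, key=lambda h: load.get(h, 0.0))

replicas = ["host-b", "host-c"]  # hosts holding replicas of the failed guest
healthy = {"host-a": False, "host-b": True, "host-c": True}  # host-a is down
load = {"host-b": 0.70, "host-c": 0.35}
print(pick_failover_host(replicas, healthy, load))  # host-c: healthy, least loaded
```

The `None` branch matters: if every replica host is also lost, HA alone cannot save the guest, which is why the backup and disaster recovery solution discussed below remains essential even in a clustered deployment.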

A clustered infrastructure implementation can suit a business of any size but is more common in medium to enterprise-scale deployments. Having said that, it is understandable that many smaller businesses may only want to budget for a single virtual host when planning a hardware refresh; whilst this does present a single point of failure, it can and should be complemented by a robust backup and disaster recovery solution. Ultimately, the final configuration must be reviewed on a case-by-case basis to determine whether the inherent risks are acceptable for the business.

Next week: Virtual Storage.


Published by
Mark Farrell
