VMware Lab



    VMware vSphere offers some extremely powerful virtualization technology for businesses and enterprises. If you are still new to the technology, or to virtualization in general, and are looking to get a lab or testing environment set up to try it out, the task can be quite daunting. Sure, you could go ahead and install one of VMware's hypervisors on a single server and throw a few Virtual Machines (VMs) on it, storing them on local storage to get started, but this way you don't get to see the really cool features vSphere has to offer. High Availability for your Virtual Machines, Distributed Resource Scheduling to manage resources, and the potential energy-efficiency gains that become apparent once you start running multiple VMs on a handful of hosts are just a few of the great features a fully configured vSphere vCenter environment can offer.

    To set up a vSphere cluster, the bare essentials you will need are: at least two host servers (hypervisors), shared storage, a Windows domain and a vCenter server to link everything together.

    This would normally be quite a bit of physical hardware to source for a test lab or demo; however, it is possible to create all of this in a nested setup (running Virtual Machines inside of other Virtual Machines), all running on top of a normal Windows host PC.

    In this article, we'll go through setting up a VMware lab environment, all hosted and run off one PC. As this is a lab environment, you should note that it will in no way be tuned for performance. To even begin tuning for performance, you would need dedicated hypervisors, shared storage (a Fibre Channel or iSCSI SAN) and networking equipment, as you would use in a production environment.

    Once we have our lab environment set up, you'll be able to create VMs and use vMotion to migrate them across your host servers with zero downtime, as well as test VMware HA (High Availability) and DRS (Distributed Resource Scheduler), two of the great features I mentioned above as being some of the product's strongest selling points.

    Prerequisites

    This is a list of what we need (and what we will be creating in terms of VMs) to satisfy our requirements for a fully functional lab environment.

    The physical host PC:

    o At least a dual-core processor with either AMD-V or Intel VT virtualization technology support.
    o 8 GB RAM minimum, preferably 10 to 12 GB.
    o VMware Workstation 7.x.
    o A 64-bit operating system that supports VMware Workstation (such as Windows 7, for example).
    o Enough hard disk space to host all of our VMs (I would estimate this at around 100 GB).

    The VMs we will create and run in VMware Workstation on the physical host PC:

    o ESXi VM #1 (Hypervisor)
    o ESXi VM #2 (Hypervisor)
    o Windows Server 2003 or 2008 Domain Controller
    o Windows Server 2003 x64, 2008 x64, or 2008 R2 x64 vCenter server
    o Shared Storage VM (I use the excellent, open-source FreeNAS)

    As you can see, we do need a fairly beefy host PC, as it will be running everything in our lab! The main requirement here is that the CPU supports the necessary virtualization features and the machine has a lot of RAM. For VMware Workstation, you could use a 30-day trial version, but a license is not very expensive and is very useful to have for other projects. A free alternative may be to use VMware Server 2.0, although I have not tested this myself.

    Our first two VMs will have VMware ESXi 4.1 installed on them. You can register for and download a 60-day trial of ESXi from VMware's website. The next machine we need to create will be our Windows Server 2003 or 2008 Domain Controller, with the Active Directory and DNS Server roles installed on it. Thirdly, we need a VM to run VMware vCenter Server. This is the server that provides unified management for all the host servers and VMs in our environment; it also allows us to view performance metrics for all managed objects and to automate our environment with features like DRS and HA. Lastly, we will need shared storage that both of our ESXi hosts can see. In this lab environment we can use something simple like FreeNAS or OpenFiler, two open-source solutions that allow us to create and share NFS or iSCSI storage. In this article we will opt for FreeNAS and will be using it to provide NFS storage to our two ESXi hosts.

    For networking, I will be using VMware Workstation's option to bridge each VM's network adapter to the physical network (i.e. the network your physical host PC is using). Therefore all vSphere host, management and storage networking will be on the same physical network. Note that this is definitely not recommended in a production environment for both performance and reliability reasons, but as this is just a lab setup, it won't be a problem for us.

    Installation

    Start off by installing VMware Workstation on your host PC. Once this is running, use the New Virtual Machine wizard to create the following VMs.

    Windows Server Domain Controller. I used an existing Domain Controller VM I already had. If you don't already have one to use for your lab environment, create a new Windows Server 2003 or 2008 VM, put it on the same network as your host PC (all your VMs will be on this network) and install the Domain Controller role using dcpromo. You will also need something to manage DNS, so make sure you also install the DNS Server role. Specifications for the VM can be as follows:

    Windows Server 2003 version:

    o 1 vCPU
    o 192 MB RAM
    o 8 GB hard drive
    o Guest operating system: Windows Server 2003 (choose 32- or 64-bit depending on your OS)

    Windows Server 2008 version:

    o 1 vCPU
    o 1 GB RAM
    o 12 GB hard drive
    o Guest operating system: Windows Server 2008 (choose 32- or 64-bit depending on your OS)

    To keep this lab nice and simple, your domain controller should exist on the same network as all of your other VMs. Therefore, assign it a static IP address on your local physical network. My lab network shares the local physical network at home with my other PCs and equipment (I simply bridge all VMs to the physical network in VMware Workstation). Therefore I set my DC up with an IP address of 192.168.0.200 and a subnet mask of 255.255.255.0, and pointed the default gateway at my router. DNS is handled by the same server with its DNS role installed, so this is set to localhost (127.0.0.1) once the DNS role is installed.

    Figure 1: IP settings for the Domain Controller

    When running dcpromo, you can choose most of the default options to set up a simple lab domain controller. Here is a guide I did a while back on creating a set of Domain Controllers for use as lab DCs: http://www.shogan.co.uk/?p=306. Note that you only really need one DC here, as we are not going for high availability, so follow the steps through to create a primary DC. For my lab I used the following basic settings:

    o Server / host name of VM: your choice
    o Domain in a new forest
    o Full DNS name for new domain: noobs.local
    o Domain NetBIOS name: NOOBS


    o Install and configure the DNS server on this computer, and set this computer to use the DNS server as its preferred DNS server.

    After setting up your DC, open up DNS Management and get your two A (host name) records created for your future ESXi hosts. Use these FQDN hostnames when you configure your ESXi host DNS names later on using the ESXi console networking configuration.

    Figure 2: Creating A records for our two ESXi hosts on the DNS Server.

    2 x ESXi 4.1 Hosts. These hosts will be our workhorses; they will be managed by vCenter Server and will be the hypervisors that run our nested Virtual Machines. HA and DRS will look after these two hosts, managing high availability of Virtual Machines if one were to fail and distributing resources between the two based on their individual workloads. Note that the minimum amount of RAM required for ESXi is 2 GB, but the HA agent that needs to install on each host is likely to fail to initialize if we don't have at least 2.5 GB RAM per host. In a production environment, these hosts would be physical rack-mounted servers or blade servers. Create two of these VMs using the new custom VM wizard and give each ESXi host the following specifications:

    o 1 vCPU
    o 2560 MB RAM (more if you can, as each VM that the host runs requires RAM from the ESXi host's pool of RAM)
    o 40 GB hard drive (thin provisioned)
    o Networking: use bridged networking
    o 1 x extra vNIC (network adapter)
    o Guest operating system: VMware ESX

    Don't forget to add the additional NIC using Bridged mode; this is so that we can simulate management network redundancy (and therefore not get an annoying warning in vCenter about not having this in place). It won't be true redundancy, but it will at least keep vCenter happy. Before completing the wizard, choose the Customize Hardware button to add this extra NIC. Here is a screenshot of what your VM settings should look like, followed by a screenshot showing the second NIC being added.

    Figure 3: Settings summary for your ESXi VMs


    Figure 4: Adding an extra NIC to each ESXi host VM.

    For the installation ISO, download the 60-day trial of ESXi from VMware's website in ISO format. You'll need to register a new account with VMware if you don't already have one. You may also receive a free product key for ESXi when setting up your ESXi hosts; don't use this, as we want to keep the full functionality that the trial version gives us, leaving it in 60-day trial mode. Specify the installation ISO in the CD/DVD Drive properties on the ESXi host VM, or when the new VM wizard asks for an installation disc.

    Power up the host VM and follow the prompts of the ESXi installer. Whilst there are some best practices you can follow and specific steps you can take when installing hosts in a production environment, for the purposes of building our lab environment we can just leave all options at their defaults. So step through the installation wizard choosing all the defaults. After this is done, following a reboot, you should be greeted by the familiar yellow ESXi console screen. The default password is blank (i.e., empty) for the user root. Press F2 and log in to the configuration page with these credentials.

    First things first, let's change the root password from the default blank to something else (Configure Password). After this, navigate to the Configure Management Network option, and then choose IP Configuration. Your host should be bridged on to your physical network and will probably have picked up an IP from your local DHCP server. Change this to a static IP address on your local network, and specify your default gateway.


    Figure 5: Setting a static IP address for ESXi on your network.

    Now, go to DNS Configuration and ensure your DNS server(s) are specified. Change your host name to the same name that you created A records for on your Windows Server DNS (ensure the IP you chose for your host is the same as specified for your A record too). Press Enter, then ESC, and when prompted, choose to restart the management network to apply the changes.

    Follow the same procedure above for your second host, choosing a different static IP and hostname on the same network this time, but keeping all the other options the same. Remember the FQDN host name records for each ESXi host we created in DNS Management on our DNS server earlier? Well, we'll now test both of these A records by pinging the hostnames of our ESXi servers from the Windows Server Domain Controller to ensure they both resolve to their respective IP addresses.

    Remember that we are keeping all of our VMs on the same local subnet. In my lab, I used IP addresses of 192.168.0.x with a subnet mask of 255.255.255.0 for all my VMs. Therefore my ESXi FQDN host names in DNS resolved to 192.168.0.80 and 192.168.0.81.
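
    If you would rather script this check than ping each name by hand, here is a quick Python sketch (run it from the host PC or the DC) that confirms both A records resolve. The FQDNs below are placeholders; substitute the names you created in DNS Management.

    import socket

    # Hypothetical FQDNs; replace with the A records you created earlier.
    esxi_hosts = {
        "esxi01.noobs.local": "192.168.0.80",
        "esxi02.noobs.local": "192.168.0.81",
    }

    for fqdn, expected_ip in esxi_hosts.items():
        try:
            resolved = socket.gethostbyname(fqdn)
        except socket.gaierror as err:
            print(f"{fqdn}: lookup failed ({err})")
            continue
        status = "OK" if resolved == expected_ip else f"MISMATCH, expected {expected_ip}"
        print(f"{fqdn} -> {resolved} [{status}]")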

    FreeNAS VM (shared storage). Now we need to set up our shared storage, which both ESXi hosts need to have access to in order to use clustering capabilities such as HA and DRS.

    You can follow this guide to download a preconfigured VM for VMware Workstation and get everything you need for your NFS shares set up: http://sysadmin-talk.org/2011/04/create-your-own-network-storage-solution-using-freenas/. It also has a section near the end that demonstrates adding the shared storage to each ESXi host. When you download the FreeNAS VM as explained in this guide, you'll just need to use VMware Workstation to open the .vmx file that comes with the download to get going.

    vCenter VM. This VM requires a 64-bit Windows Server 2003 or 2008 / R2 guest operating system, so get a VM set up with one of these and add it to your Windows domain. Once you are ready to begin, register for and download the vSphere vCenter Server software from VMware; it comes in ISO format, so just attach it to your VM by going into your VM's settings in Workstation and connecting it. Start the installer and, again, we will follow through with all the default options, as this is just a lab setup. To begin with, just ensure you select the Create a standalone server instance option for the vCenter installation. You will get to the Database section in the setup soon; choose the default of a SQL Server Express instance for the database. This is fine for smaller deployments of ESX/ESXi hosts and vCenter. Larger production environments would usually go for a SQL Server Standard, Enterprise or Oracle DB setup and specify a dedicated database server at this stage. Complete the setup and restart the VM afterwards.

    Specifications for the vCenter VM can be as follows:

    o 1 vCPU
    o 2 GB RAM
    o 40 GB hard drive (thin provisioned)
    o Guest operating system: choose the appropriate Windows OS here.

    Individual ESXi host configurations. Now we need to configure each ESXi host using the vSphere client from our host PC. We need to get them to match each other identically in terms of setup so that HA and DRS work well between the two host servers. In a production environment you would use a feature called Host Profiles to establish a baseline host profile, and would then be able to easily provision host servers off of that profile (a feature available in vCenter Enterprise Plus only). However, we only have two hosts to do here, so we'll configure them manually.

    Open a web browser on your local PC and browse to the IP address of one of your ESXi hosts using the https:// prefix. You'll get to a banner page, which should offer you a download of the vSphere client for Windows. Download and install this on your local host PC. Run the vSphere client and log in to your first ESXi host using the root credentials you configured earlier; accept the security certificate warning you get when you click Login.
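
    As an aside, everything the vSphere client does is driven by the vSphere API, which you can also reach from Python using pyVmomi (VMware's Python SDK). pyVmomi postdates the vSphere 4.1 used in this article, so treat this and the later sketches as illustrative for more recent labs rather than exact; the IP address and password below are placeholders. A minimal connection to a single ESXi host looks like this:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # The lab hosts use self-signed certificates, so skip verification
    # (the scripted equivalent of accepting the certificate warning).
    context = ssl._create_unverified_context()

    si = SmartConnect(host="192.168.0.80", user="root",
                      pwd="your-root-password", sslContext=context)
    print("Connected to:", si.content.about.fullName)
    # Call Disconnect(si) when finished; the later sketches reuse this si.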

    Figure 6: Specify the IP address of the ESXi host you are connecting to and login.

    Once the management GUI appears, we'll be able to start configuring the ESXi host, beginning with adding our shared storage. Navigate to the Storage Configuration area for your host and click the Add Storage... link near the top right of the GUI.


    Figure 7: Adding storage to our first ESXi host

    As we have configured an NFS share for our storage, we'll choose the option for Network File System. On the next page, we'll enter the details of the NFS share we want to connect to.


    Click Next, review the summary page to ensure you are happy, then finish the wizard to complete adding your shared storage. Remember to keep the datastore names the same across all your ESXi hosts for consistency. Under Datastores in the vSphere client, you should now be able to see your shared storage, which the host server will now be able to use to access and run VMs from.

    Figure 8: datastore1 refers to the ESXi's local storage. SharedDatastore1 is our shared storagewhich we'll be using.
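
    For reference, the same Add Storage step can be scripted with pyVmomi. This is a hedged sketch that reuses the si connection from the earlier sketch; the FreeNAS IP and export path are placeholders for your own share details.

    from pyVmomi import vim

    # On a direct ESXi connection the inventory is a single datacenter
    # containing one compute resource with one host.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.0.90",       # FreeNAS VM (placeholder IP)
        remotePath="/mnt/nfs_share1",    # NFS export (placeholder path)
        localPath="SharedDatastore1",    # keep this name identical on both hosts
        accessMode="readWrite",
    )
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print("Mounted datastore:", ds.name)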

    Next, we'll do the network configuration. Start by clicking Networking under the Configuration area in the vSphere client. Click on Properties for vSwitch0 and we'll get our second NIC added and configured for standby mode. A vSwitch in VMware is a virtual switch; try to think of it as a logical switch, as it is not much different to the real thing!

    Figure 9: Open the vSwitch0 Properties to start configuring your host's networking.

    Click the Network Adapters tab, then select the Add button. A page will appear which should list your unclaimed virtual network adapter with the name vmnic1 (vmnic0 is already being used as our active NIC). Tick this adapter (vmnic1), and then click Next.


    Figure 10: Claim vmnic1 to start using it for vSwitch0.

    On the next page, we'll move vmnic1 down to be a Standby adapter. Highlight it, then click the Move Down button and finish the wizard.


    Figure 11: Assign vmnic1 as a Standby adapter.

    Now click on the Ports tab under vSwitch0 Properties and click Add... We are now going to add a VMkernel port group, which is going to be responsible for our vMotion network traffic (used for VMware HA and DRS in this instance). VMkernel port groups are used to connect to NFS / iSCSI storage, or for vMotion traffic between hosts when moving Virtual Machines around. Configure the page as per the screenshot below: give your VMkernel port a manual, unused IP address on your network along with your subnet mask (i.e. the same internal network being used for all your other VMs), keeping your usual default gateway as the VMkernel default gateway. In my lab I am using my ADSL router as the default gateway for all my network traffic, so I used the IP address of 192.168.0.1. Remember to tick the Use this port group for vMotion tickbox, as we'll want to be able to use vMotion in our lab. Finish the wizard, which takes you back to the vSwitch0 Properties window.


    Figure 12: Adding a VMkernel port group.
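
    This VMkernel port group can also be created programmatically. Below is a rough pyVmomi sketch, reusing the host object from the storage sketch; the port group name and the 192.168.0.85 address are placeholders.

    from pyVmomi import vim

    net_sys = host.configManager.networkSystem

    # Add a port group for the VMkernel port to vSwitch0.
    pg_spec = vim.host.PortGroup.Specification(
        name="VMkernel-vMotion", vlanId=0, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    # Create the VMkernel NIC with a static, unused IP on the lab subnet.
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.0.85",
                             subnetMask="255.255.255.0"))
    vmk = net_sys.AddVirtualNic(portgroup="VMkernel-vMotion", nic=nic_spec)

    # The scripted equivalent of ticking "Use this port group for vMotion".
    host.configManager.virtualNicManager.SelectVnic("vmotion", vmk)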

    At the vSwitch0 Properties window, highlight your new VMkernel port group in the list and then click Edit. We'll now configure a security policy on this port group, setting Promiscuous Mode to Accept. Click the Security tab and configure as per the screenshot below. This will allow us to use vMotion in our nested ESXi host configuration.

    Finally, click OK and then Close to complete our networking configuration for the ESXi host. Here is a summary of our vSwitch0 virtual switch configuration:


    Another way of configuring your network adapters for vSwitch0, as Duncan Epping (http://www.yellow-bricks.com/) recommends, would be to set both adapters as Active on your vSwitch0 (provided your physical switch allows you to). In our case it does, because we are bridging the network adapters to our physical network via our host PC's physical network connection (i.e. VMware Workstation is essentially our physical switch). This configuration would then enable us to use both NICs for the types of traffic we have defined on our vSwitch, instead of just one handling active traffic and one sitting in standby mode. This configuration still maintains network redundancy; you can test this by running a VM on one of the hosts with two adapters in Active mode, setting the guest operating system in the VM to ping a location inside or outside your network, then removing one of the active NICs from your vSwitch0. You shouldn't see any dropped packets.

    To set one of your Standby NICs back to an Active NIC, configure your vSwitch0 as per this screenshot, using the NIC Teaming tab:


    Setting up vCenter and our Cluster

    Our final stage of configuration will now take place in vCenter, using our vSphere client from our local host PC. Before we begin, though, let's set up our PC's hosts file to add entries for our ESXi hosts and vCenter server. These entries allow the vSphere client you'll be running on your physical host to correctly identify the actual ESXi host servers. For example, when you view the consoles of your VMs in vCenter, your PC needs to know which host to open the VM console on, so it will need to match up each ESXi host server's hostname with its correct IP address. Open the hosts file located at C:\Windows\System32\drivers\etc\hosts with Notepad and edit it to point the FQDNs of each ESXi host to their correct IP addresses. Below is an example of the hosts file configuration on my Windows 7 PC which is running the entire lab. Note that I added a simple entry for my vCenter server too (noobs-vc01) so that I can connect to this name using the vSphere client instead of the IP of my vCenter server.
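
    As a guide, the entries look like the following. The ESXi host names here are illustrative and the vCenter IP is a placeholder; use the FQDNs and addresses you actually assigned.

    # C:\Windows\System32\drivers\etc\hosts (example lab entries)
    192.168.0.80    esxi01.noobs.local
    192.168.0.81    esxi02.noobs.local
    192.168.0.82    noobs-vc01.noobs.local    noobs-vc01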


    Ensure that all your lab VMs are powered up, i.e. the DC, FreeNAS, the two ESXi hosts and vCenter (in that order, too). Use your vSphere client to connect to your vCenter server by IP or hostname. You can use your domain administrator account to log in with, although best practice is of course to create normal domain users in Active Directory to use for vCenter administration. Accept the message stating you have 60 days left of your trial, and you should land on a welcome / summary area of the GUI when the login is complete.

    Before we can add any objects to the vCenter Server inventory, we need to create a datacenter object. This is, for all intents and purposes, a container object, and can often be thought of as the root of your vCenter environment. The items visible within the datacenter object will depend on which Inventory view you have selected in the vSphere client. For example, Hosts and Clusters will show your cluster objects, ESX/ESXi hosts and VMs under the datacenter object. Click on Create a datacenter under Basic Tasks to define yourself a datacenter object and call it anything you like. I named mine Lab-datacenter.

    Next up, we'll want to create a cluster object for our ESXi hosts to be a part of. A cluster is a group of hosts (ESX or ESXi host servers) that are used for collective resource management. We'll use this cluster to set up DRS and HA for our hosts, as these features can only be enabled on clusters. Right-click on your datacenter object and choose the option for New Cluster.

    Figure 13: Define a New Cluster under your Datacenter object.

    Now we can give our cluster a name. I chose Lab-cluster1. Check the boxes for turning on VMware HA and DRS for this cluster, and then click Next.


    Figure 14: Enable HA & DRS for your cluster.
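
    Scripted, the datacenter and cluster creation steps look roughly like this in pyVmomi. Here si is a connection to the vCenter server rather than to an individual ESXi host, and the object names match the ones used above; treat it as a sketch rather than a definitive recipe.

    from pyVmomi import vim

    content = si.content  # si: ServiceInstance connected to vCenter

    dc = content.rootFolder.CreateDatacenter(name="Lab-datacenter")

    cluster_spec = vim.cluster.ConfigSpecEx(
        # DRS, using the default Fully Automated level.
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated),
        # VMware HA.
        dasConfig=vim.cluster.DasConfigInfo(enabled=True))

    cluster = dc.hostFolder.CreateClusterEx(name="Lab-cluster1", spec=cluster_spec)
    print("Created cluster:", cluster.name)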

    You can now choose your DRS automation level. The cluster settings are quite easy to return to and change at a later stage if you would like to experiment with them (kind of what the lab is all about, really!), so choose an automation level you would like for your VMs. The default is Fully Automated, which means DRS will make all the decisions for you when it comes to managing your host resources and deciding which VM runs on which host server.


    Figure 15: DRS Automation level settings.

    The next option page in the New Cluster wizard is for DPM (Distributed Power Management). We won't be covering this feature in our lab, so just leave it off, as it is by default.

    We'll now set up VMware HA for our cluster on the next page. Host Monitoring essentially watches for host failures (physical, network, etc.); when a failure occurs, it allows HA to restart all of the downed ESX or ESXi host's VMs on another host server. Leave this enabled. (In a production environment, you should disable this option when performing network maintenance, as connectivity issues could trigger a host isolation response, which could potentially result in VMs being restarted for no reason at all!) We'll leave the Admission Control settings at their defaults. The next page allows you to set some default cluster settings; leave these at their defaults too. On the next page we see some options for VM Monitoring; leave this disabled. Next up is EVC (Enhanced vMotion Compatibility). This is a very useful feature for when you have a variety of physical hosts with slightly different CPU architectures; it allows vMotion to be compatible with all hosts in your cluster by establishing a kind of baseline for CPU feature sets. In our case, we are running virtualized ESXi hosts, so their virtual CPUs will all be of the same type, as they are all running on one physical host machine, which allows us to keep this feature disabled. Keep the recommended option of storing the VM swapfile with the Virtual Machine on the next options page, and then finish the wizard.


    Figure 16: VMware HA options to configure for the cluster.

    Remember those two ESXi host servers we configured earlier? Well, our cluster is now ready to have those servers added to it. Right-click the new cluster and choose the option to Add Host. Enter the details of your first ESXi host, including the root username and password you configured for the host earlier. You'll get a Security Alert message about the certificate being untrusted; just click Yes to accept this and it won't bug you again. Run through the Add Host wizard, leaving all the default options selected and opting for the Evaluation Mode license. Your summary page should look similar to the screenshot below. Finish off the wizard, and vCenter will add your host to the cluster and configure the vCenter and HA agents on the host for you. Repeat this process for your second host server as well.


    Figure 17: Add your ESXi hosts to the cluster by hostname (the FQDN). This is also a good test that your DNS issetup correctly.
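
    The Add Host wizard has a scripted equivalent as well. This hedged sketch reuses the cluster object from the previous sketch, with a placeholder password. Note that vCenter verifies each host's SSL thumbprint; if the task fails with an SSLVerifyFault, retry with the thumbprint the fault reports.

    from pyVmomi import vim

    for fqdn in ("esxi01.noobs.local", "esxi02.noobs.local"):
        connect_spec = vim.host.ConnectSpec(
            hostName=fqdn,
            userName="root",
            password="your-root-password",  # placeholder
            force=True)
        task = cluster.AddHost_Task(spec=connect_spec, asConnected=True)
        print("Submitted add-host task for", fqdn)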


    Figure 18: vCenter configuring various agents on the host once added to the cluster.

    Closing off and Summary

    You should now have everything you need for your vSphere lab. You have two ESXi hosts in a cluster, with High Availability and DRS, linked to shared storage. Everything should be ready to run a few VMs now and to test vMotion / HA / DRS. Here is how my lab looks in the vSphere client after completing the setup.

    So here is the fun part: get a few VMs up and running on your cluster. Right-click on one of your hosts and select New Virtual Machine, then use the wizard to create a few VMs with different operating systems. I created another FreeNAS VM using the ISO I had already downloaded, just to play around with. Start it up on your first ESXi host, then, while it is running, open a console (right-click -> Open Console on the VM) and try migrating it between ESXi hosts. You can accomplish this by right-clicking the VM, selecting Migrate, and completing the migration wizard, choosing your second host as the target to migrate to.
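
    The Migrate action is scriptable too. A rough pyVmomi sketch follows, with si connected to vCenter and placeholder VM and host names:

    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        """Return the first inventory object of the given type with that name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], recursive=True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(si.content, vim.VirtualMachine, "freenas-test")
    target = find_by_name(si.content, vim.HostSystem, "esxi02.noobs.local")

    # Live-migrate (vMotion) the running VM to the second host.
    vm.MigrateVM_Task(host=target,
                      priority=vim.VirtualMachine.MovePriority.defaultPriority)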


    Upon completing the wizard, your VM will live migrate (using vMotion) to your second host, all the while staying powered up and running whatever services and applications are active on it. As a fun test, why not set the guest operating system in your test VM to ping a device on your outside network, for example a router or switch, or another PC on your network. While it is pinging this device, get it to migrate between hosts and see if you get any dropped packets. The worst I have seen is a slightly higher latency on one of the ICMP responses (and bear in mind this is on a low-performance lab setup!).
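
    If you would like to automate that test, a small stdlib-only Python helper can count missed replies while the migration runs. The target IP is a placeholder; point it at your router or another device.

    import platform
    import subprocess
    import time

    TARGET = "192.168.0.1"  # placeholder: your router or another device
    COUNT_FLAG = "-n" if platform.system() == "Windows" else "-c"

    sent = lost = 0
    try:
        while True:  # stop with Ctrl+C
            sent += 1
            result = subprocess.run(
                ["ping", COUNT_FLAG, "1", TARGET],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            if result.returncode != 0:
                lost += 1
                print(f"missed reply #{lost}")
            time.sleep(1)
    except KeyboardInterrupt:
        print(f"\n{sent} sent, {lost} lost")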

    You now have everything you need to test out some of the great features of vSphere and vCenter Server, all hosted from one physical PC / server! Create some more VMs, run some services / torture tests in the guest operating systems and watch how DRS handles your hosts and the available resources. Do some reading and try out some of the other features that vSphere offers. You have 60 days to run your vCenter trial, so make good use of it! If you ever need to try it out again after your trial expires, you'll need a new vCenter server and trial license; just follow this guide again. Keeping everything in Virtual Machines makes setup and provisioning a breeze, and you can keep your entire vSphere lab on just one PC, laptop or server.