[VMware] VDI Requirement Gathering

Oh! It has been a month since I wrote a single post. Blogging is my favorite activity; I love sharing my experience and learning, and this is the single best platform where I can express my thoughts on the technical front. I don't know how many of you like it, but I don't see that as the motivating factor. In all cases I would love to hear back from you. Recently I ran a VDI requirement-gathering workshop with a customer, and based on various design meetings I have put together a questionnaire I would like to share with you. You will need a basic understanding of VDI, especially the technology you are supporting. First, foremost and most important: why are you looking towards VDI? Don't start with the "Why" question directly; rather, I suggest you put the question across in a way your customer understands. It is worth noting that the first meeting will be with the IT manager or CXO. They would understand if you ask them what the primary objective is in exploring VDI options.

What are the business goals/drivers for VDI?

Security, cost saving, desktop refresh: these are a few of the options which can help you drive the discussion. Without understanding each of the business drivers your conversation will be more like a Q&A, when it should be a discussion. If desktop refresh is one of the drivers, then the immediate question would be to understand whether the existing desktops can be reused. Are the existing desktops end of life? If existing desktops will be reused, it is very likely users might use both desktops. This is an opportunity to ask where users will be saving their data, and it also gives you the insight that you may need a profile migration tool in place. While we are here, ask whether users are using PSTs and whether they are stored in some central location. Here is a reference on this topic; that post also provides likely solutions.

What applications will be used via VDI desktops, and what is the nature of these applications?

This is the most important thing I learnt from Brian Suhr's book: VDI is all about apps, not about desktops alone. How you present applications (apps) to the end user (iPad, tablet, phone, even in cars) is of utmost importance. The entire focus of your discussion should be around these applications: who is using them, and what they are doing with them. Is there a common set of applications used across the organization? Are there heavy-graphics, high-I/O (AutoCAD, Visual Studio), memory-intensive or CPU-intensive (graphics), or audio-recording applications in use? Are these applications business specific, and can they afford downtime? This discussion will help you decide 1) whether you need multiple desktop pools and 2) whether you need an application virtualization feature. As you could easily guess, the more variation in the application portfolio, the stronger the inclination to separate applications from the desktop pool. The most frequently used applications can be part of the standard image or can be ThinApp'ed; this is very well explained in Brian's book. In each case you need the count of users who are using an application. E.g. if there are only 5 Photoshop users and they just do light graphics work, you probably don't need GRID cards. If they are heavy graphics users, then along with GRID cards you are very likely to consider monitor size and resolution. You can see how one question leads to the answer to another. Now that you understand the nature of the applications, the most critical part is how licensing works. E.g. Office licenses need validation, and that needs a license management server (KMS).

Are there any users, other than desktop admins, who need to install applications on local desktops?

Now this is one of the use cases for persistent desktops. If there are developers in your organization who need a pool of applications, they obviously need administrator access and much more. As you could easily guess, you must know how many developers/users have this requirement; this will drive the DR strategy for persistent desktops. Along with that, you need to know how critical the nature of their work is. Here you can pause and ask how frequently application refreshes occur and how applications are refreshed. This is a critical piece of information, as it will impact application virtualization and the effort needed to keep packages updated. E.g. if application A is refreshed every month (yes, there are applications which are refreshed every month), and you are proposing application virtualization for that set of applications, you need to consider how you are going to ensure those updates are integrated. This is an ongoing cost and may vary based on the complexity of the application. Yes, I'm reading your mind: "App Volumes". Yeah! Do you need to be an architect to propose it? Think again!

Are users working in shifts?

If yes, what is the anticipated number of concurrent users? This will help you decide licenses for VDI and CALs, and also the percentage of users who need floating desktops. E.g. if there are 300 users working in 3 shifts, i.e. 100 users per shift, you just need 100 concurrent-user licenses; you can add a 5% allowance and procure licenses accordingly. Floating desktops are a must here. CALs refers to end-user CALs for desktops, or RDS CALs if you are offering RDSH-based desktops. This is also an appropriate place to check whether the customer already has Terminal Services licenses.

What is the anticipated total number of users (if they are not shift users)?

This will help you identify the license requirements for AV, software licenses for Office, desktops and other products which do not base their licensing on concurrent users. Think of the difference between buying 300 versus 100 antivirus licenses.

From where will end users access desktops/applications?

This helps you understand how access has to be granted to end users, e.g. WAN/LAN/Internet. If there are multiple sites, what is the required bandwidth between them? How will users access from the remote site (thin client/desktop/laptop)? Internet users could be mobile, working from home or working from the office. The number of users and the number of applications they need to access will have a direct impact on the bandwidth and latency required.

Do users need access to their desktop from home?

Yes, this is desktop access, not just application access. If yes, there is a whole lot of security consideration. You need a View Security Server or an identity/access appliance; the latter would be suitable if there is sufficient VMware infrastructure in the DMZ. Will all users need access from home? Do users need two-factor authentication? If yes, note that RSA tokens are licensed per user. Is access from home critical, or is it on a best-effort basis? This will drive your high-availability design. Again, you will need to restrict VDI desktops to a specific VLAN.

Are users using Lync/audio/video?

Lync will have a direct impact on the selection of the thin client: it must support the Lync plug-in. Zero clients definitely do not support it. Weigh factors like features, cost, design and performance.

Do you need USB device/scanner/smart-card reader redirection?

This is often forgotten. Users need USB devices for various reasons, and the solution must be able to accommodate this requirement. In the healthcare industry things become even more critical when staff need to move between rooms attending to patients. This requirement will have an indirect impact on your selection of endpoint device.

List down the agents installed on the desktop

  • AV Agent
  • Backup Agent
  • SCCM/LANDesk Agent

Do you still need these agents in VDI desktops? Backup agent? Definitely not. You would no longer be taking desktop backups, would you?

The following questions will help you build the supporting infrastructure:

  1. Do you have a Certificate Authority? If not, you either have to recommend one as a prerequisite (read this post from Harsha) or assist them in building one
  2. Do you have a load balancer in your existing setup? If not, you can either procure one on their behalf or add it to the prerequisite list. If they need an active-active VDI solution, then the load balancer should be intelligent enough to divert traffic based on source IP
  3. Do you have SRM? If the DR strategy is active-passive, then SRM will assist in DR failover of the VDI components. Refer to this white paper for further details
  4. Do you have Terminal Server licenses? If yes, you can explore the possibility of providing RDSH to the customer for select applications
  5. Where are users storing data? Local desktop/laptop? Then you must consider a file server in your design for user data, and probably for PSTs as well
  6. Do you have a DHCP server at the site? Is it redundant?
  7. Are there any non-corporate users accessing desktops, e.g. vendors or contractors? How are these users prohibited from accessing corporate data?
  8. How are users connecting to the network? This will have a direct impact on the users' endpoints.

This is just the tip of the iceberg. If you follow this questionnaire, I'm sure you can build your own based on your experience. The biggest advantage of this questionnaire is that it allows you to build a requirement-gathering document without much effort.


[Nutanix] NPP Journey

At the start of this year I chose to learn a new technology: Nutanix. To kick off the journey I set NPP as a starting goal. NPP stands for Nutanix Platform Professional. I have found that setting a certification as a goal is the best way to learn any technology: if you focus on the certification for a particular technology, you are more likely to learn it, since the focus is on clearing the exam. Nutanix as of now offers three certifications: NPP, NSS and NPX.

Where to start

  1. You must install Nutanix Community Edition, and here is the best blog I have found. This is the only blog which explains the workaround if you don't have an SSD
  2. If you have a budget of 35,000 INR, I strongly suggest you enroll in the Online Plus course plus exam.
  3. At least go through the Prism Web Console Guide.
  4. And the YouTube videos here
  5. Optionally, the Nutanix Bible here

Online Plus Course

First and foremost, this is a very unique learning approach. You are given access to the Nutanix course material, and the lab starts after 2-3 days. This lets you read the training material at least 2 days in advance and gives you a good head start with Nutanix. One of the best parts of the Online Plus course is that you have access to the learning material even after you have completed the course. Unfortunately it is not documented anywhere, but I suspect access remains for more than a year, so you don't need to take any notes. The course duration is 2 weeks and you get two lectures: the first is about the lab, and the second is a question-and-answer session with the instructor. I liked the second lecture a lot; the instructor was extremely knowledgeable, a source of a lot of information, and has been part of this blog. It was worth the 35k; however, if your organization is going to enter a partnership agreement with Nutanix, you get the Online Plus course free.

The NPP exam is free to all Nutanix customers and partners; request an account from education@nutanix.com.

Do you really need to undergo this course? You would be surprised: it is completely optional. In fact NPP does not have any prerequisites. Want to take the exam? Just drop an email to education@nutanix.com and they will send you an email for the exam. And it might surprise some of you: this is an open-book exam. You don't have to go to any VUE/Prometric center for it; you can take this exam in a group, at home or in the office. It reminded me of how we used to pass compliance requirements in my previous organization. Jokes apart, you have a choice to be honest here. So this exam is free, and it is open book. I took it sometime in March and am proud to say I completed it within a month. What next?

Preetam Zare_NPP Certification Exam (4.5)_Certificate

References for Nutanix Platform Professional

Nutanix Certifications 

Nutanix Professional Exams

Nutanix Online Plus exam course description

Nutanix Online Plus Exam schedule


[Nutanix] Internal CA Signed Certificate for Console Access for Prism

SSL certificates are used to encrypt communication between client and server, and a signed certificate ensures the server is authenticated. Self-signed certificates are not signed by a third party and therefore cannot be fully trusted. For internal services, you can use an internal Certificate Authority (internal CA). Nutanix uses SSL to secure communication with a cluster, and the web console allows you to install SSL certificates.
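To make the self-signed vs. CA-signed distinction concrete, here is a quick check (a sketch using a throwaway certificate; the CN is just the cluster name from this post). A self-signed certificate signs itself, so its issuer and subject are identical:

```shell
# A self-signed certificate signs itself, so issuer == subject.
# Generate a throwaway self-signed cert and print both fields to compare.
openssl req -x509 -newkey rsa:2048 -nodes -keyout selfsigned.key \
  -out selfsigned.crt -days 1 -subj "/CN=sssnut.shsee.com"
openssl x509 -in selfsigned.crt -noout -issuer -subject
```

A certificate issued by your internal CA would instead show the CA's name as the issuer, which is what makes it trusted inside the domain.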

Nutanix provides a very simple way to configure an SSL signed certificate to encrypt communication between the console and the server. You need a Microsoft CA and OpenSSL. OpenSSL can be downloaded from here, and installation of a Microsoft CA is explained here. As with any certificate, the Certificate Signing Request (CSR) is the first step. In order to create the CSR, you need an openssl.cfg file. Below is the file I created; I used a similar file for VMware certificates.

[ req ]
default_bits = 2048
default_keyfile = rui.pem
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:sssnut, IP:, DNS:sssnut.shsee.com, DNS:NTNX-f8b67341-A-CVM, IP:, DNS:NTNX-f8b67341-A-CVM.shsee.com

[ req_distinguished_name ]
countryName = AE
stateOrProvinceName = AbuDhabi
localityName = ME12
0.organizationName = SHSEE
organizationalUnitName = Nutanix Services
commonName = sssnut.shsee.com

Pay special attention to countryName: country codes are two letters only. I was using UAE but was getting an error while creating the CSR; for UAE it is AE. default_bits is the key length; various key lengths are supported by default, but do ensure the CA you are configuring supports at least a 2048-bit key. In the cfg file I edited only the subjectAltName and the [ req_distinguished_name ] entries; everything else remains default. After you have downloaded OpenSSL from http://slproweb.com/products/Win32OpenSSL.html, extract it to C:\ as shown. Take a backup of openssl.cfg.

You can refer my previous post of openssl.cfg file here

Run the following command to create the CSR. Do note that the rui.pem file is the private key, which is unique per request.
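The command screenshot did not survive this extract, so here is a minimal sketch of the invocation. A trimmed config is written inline to keep the example self-contained; in practice use the full openssl.cfg shown above:

```shell
# Trimmed copy of the config above, saved as openssl.cfg
# (use the full version from this post in practice):
cat > openssl.cfg <<'EOF'
[ req ]
default_bits       = 2048
encrypt_key        = no
prompt             = no
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
countryName  = AE
commonName   = sssnut.shsee.com
EOF

# Create the private key (rui.pem) and the CSR (rui.csr) in one call
openssl req -new -nodes -config openssl.cfg -keyout rui.pem -out rui.csr

# Sanity-check the CSR before uploading it to the Microsoft CA
openssl req -in rui.csr -noout -verify
```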


Browse to http://CertificateAuthorityFQDN/certsrv/

Upload the CSR to the Microsoft CA as shown below. Review the SlideShare for detailed steps.


This is all that is needed.

Finally, I wish to thank Marc for promoting my previous post. Believe it or not, that post hit the highest view count so far. The power of social media!

[VMware] Automation of Windows Server 2012 R2 using Powershell, AnswerFile

Last week I shared my learnings on building an answer file and automating Windows Server deployment on Acropolis Hypervisor [AHV]. This post is very similar to the earlier one, but is based on deployment on the VMware platform. I would really like to explain the code line by line, but that would make the post highly verbose, so let me keep it short and simple. You need to create a VM to install an operating system. For the virtual machine you need mandatory inputs, e.g. vCPU, vRAM, storage, guest OS, datastore and CD-ROM (for my automation workflow you need two CD-ROMs). After the virtual machine is created, attach the operating system ISO; my script assumes you have already uploaded the ISO to a datastore. Below is the overall workflow.


For automation, you just need the path to the ISO. With that done, you need to update the answer file. Well, I know I said I'm creating an answer file; the answer file was created in the previous post. All you need is to update it with the two variables I mentioned above, i.e. server name and IP address. To get this done, I'm loading the XML file and updating the parameters as shown below. Once the parameters are updated, I'm saving the file.

$xml = New-Object XML
$xml.Load($xmlsourcepath)
$xml.unattend.settings[1].component[7].Interfaces.Interface.UnicastIpAddresses.IpAddress.'#text' = $IPaddress
$xml.unattend.settings[1].component[0].ComputerName = $VMName
$xml.Save($xmlsourcepath)
Please note I had to explicitly cast the values to [string]; apparently it is a bug in PowerShell.
$VMName = [string]$VMNamestr
$IPaddress = [string]$IP

The next task is to create an ISO file of the answer file and copy it to the datastore. Watch out: I have created the ISO file with the same name as the server name (see the oscdimg line in the full script). This is helpful because the same ISO cannot be attached to different virtual machines, as each XML file has a unique IP and server name.

Now create an additional CD-ROM on the VM to attach the answer-file ISO. When you attach an ISO to a VM, you can only set "Connect at power on" for the additional CD-ROM, but in order to actually connect it, it must be "Connected". See below what I mean.


So I attached the ISO and ticked the checkbox "Connect at power on". Now when I power on the virtual machine, I get this additional CD-ROM in a connected state, but by that time the OS has already started booting. As a workaround, I'm resetting the VM after 5 seconds (the Restart-VM line below). This trick fixed the issue.

New-CDDrive -VM $VMName
Start-VM -VM $VMName -Confirm:$false
#attach the OS ISO and the answer-file ISO
Get-CDDrive -VM $VMName -Name "CD/DVD drive 1" | Set-CDDrive -IsoPath $ISO -StartConnected:$true -Confirm:$false
Get-CDDrive -Name "CD/DVD drive 2" -VM $VMName | Set-CDDrive -IsoPath "[PhyStorage]\ISO\$VMName.iso" -StartConnected:$true -Confirm:$false
#check if the CDROM is connected; if not, connect it and reset the VM
$Cstates = Get-CDDrive -VM $VMName
foreach ($Cstate in $Cstates) {
    if ($Cstate.ConnectionState.Connected -eq $false) {
        Get-CDDrive $Cstate.Parent -Name $Cstate.Name | Set-CDDrive -Connected:$true -Confirm:$false
        Start-Sleep -Seconds 5
        Restart-VM -VM $VMName -Confirm:$false
    }
}

Here is the full code

#Purpose is to create Virtual machine, attach OS ISO File, create Answer file, create ISO of answer file
#add secondary CDROM and attached
Add-PSSnapin -Name *vmware*
Connect-VIServer -User servera09@shsee.com -Password VMware1!
$VMNamestr=read-host "Enter the name of virtual machine"
$IP=read-host "Please enter IP for this Machine"
#casting into strings
#VM Details
#Remove old xml files from the destination
Remove-Item $xmldestination\*.xml
####Virtual Machine is created######
New-VM -Name $VMName -Datastore $Datastore -DiskGB $diskinGB -MemoryGB $RAMinGB -GuestId $GuestOS -NumCpu $vCPU -ResourcePool Resources -Version v8 -CD
#------------------------------------------------updated Answer File---------------------------------------------------------------------------------------#
$xml = New-Object XML
Copy-Item $xmlsourcepath $xmldestination
& 'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\oscdimg.exe' -n $xmldestination $answerISO

#copy ISO to datastore
Copy-DatastoreItem -Destination $answerISODestination -Item $answerISO
#add additional CDROM for Answer file
New-CDDrive -VM $VMName
Start-VM -VM $VMName -Confirm:$false
#attach ISO to datastore
Get-CDDrive -VM $VMName -Name "CD/DVD drive 1"| Set-CDDrive -IsoPath $ISO -StartConnected:$true -Confirm:$false
Get-CDDrive -Name "CD/DVD drive 2" -VM $VMName | Set-CDDrive -IsoPath "[PhyStorage]\ISO\$VMName.iso" -StartConnected:$true -Confirm:$false
#check if CDROM is connected, if not connect it.
$Cstates = Get-CDDrive -VM $VMName
foreach ($Cstate in $Cstates) {
    if ($Cstate.ConnectionState.Connected -eq $false) {
        Get-CDDrive $Cstate.Parent -Name $Cstate.Name | Set-CDDrive -Connected:$true -Confirm:$false
        Start-Sleep -Seconds 5
        Restart-VM -VM $VMName -Confirm:$false
    }
}

[AHV] Automation of Windows Server 2012 R2 using Powershell, AnswerFile and ACLI

Last week I shared my learnings on building an answer file. I shared it for a reason, as this post builds on it. By now we have a beginner's understanding of how to create an answer file, which is sufficient to follow this post. Let's look at how to automate a Standard Operating Environment (SOE). My goal is OS automation. At a basic level, hostname and IP address are the bare minimum things that must be unique per server, so for this post I'm focusing on how to automate these two parameters.

First, the credits:

  1. Jon (next.nutanix.com)
  2. Derek for his post on integrating VirtIO Drivers

Initially I aimed to achieve the automation using PowerShell (Nutanix cmdlets); however, the information available in the help is very limited. Here is the approach: update the answer file, create an ISO of the answer file, upload the answer-file ISO, attach the OS ISO to the VM, attach the answer-file ISO to the VM and boot the VM. For this automation I have already uploaded the OS ISO.

Below is the workflow


Except for the tasks of uploading the ISO file and ensuring the VM reads the answer file while booting, all other tasks were quite easy to achieve using PowerShell.

So I had to focus on a script which would give me a way to upload an answer file to AHV using cmdlets and attach it to the VM. I figured out the first part, but the second part was definitely not coming through, at which point I reached out to Jon. Jon dropped a simple hint and I realized what needed to be done. Let me explain the complete script below.

The first step is updating the answer file. I already have a standard answer file (refer to the previous post); all I need is to ensure it is unique per VM. Server name and IP address are the bare minimum unique attributes of any server, so for the answer file I have provided two user inputs (the two read-host lines). I'm further using these inputs to create the VM name. After the answer file is updated, I'm using the OSCDIMG.exe tool to create the answer-file ISO as shown below (the oscdimg line). This ISO I'm copying to a location which is my web server's virtual directory.

Add-PSSnapin -Name NutanixCmdletsPSSnapin
Connect-NTNXCluster -server -UserName admin -AcceptInvalidSSLCerts -ForcedConnection
$VMNamestr=read-host "Enter the name of virtual machine"
$IP=read-host "Please enter IP for this Machine"
#casting into strings
Remove-Item $xmldestination\*.xml
$xml = New-Object XML
Copy-Item $xmlsourcepath $xmldestination
&'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\oscdimg.exe' -n $xmldestination $answerISO

On the Connect-NTNXCluster line I have not put the password; you're prompted for a secure-string password. I had to use the -ForcedConnection switch as the versions of AHV and the Nutanix cmdlets do not match. The -AcceptInvalidSSLCerts switch is needed when you are using self-signed certificates.

Why am I using a web server? Well, I'm yet to figure out how to upload files into AHV using PowerShell, so my script points the image import spec at a URL. Below is the snippet of the code. Here I'm (intelligently!) using the name of the VM as the name of the ISO.

$imgCreateSpec = New-NTNXObject -Name ImageImportSpecDTO
New-NTNXImage -Name $VMName -Annotation "$VMName Answer File" -ImageType ISO_IMAGE -ImageImportSpec $imgCreateSpec 
start-sleep 10 
.\plink.exe nutanix@ -P 22 -pw nutanix/4u /tmp/createvm $VMName 

It is worth knowing that ISO files in AHV are referred to by UUID and not by name. This means you can have more than one ISO with the same name.

Now that we have uploaded the ISO, all I need is to create the VM with the required specification and attach the ISOs. I have kept a standard specification; the vCPU, memory and 18 GB storage values are in the script below and can easily be changed if you wish. As mentioned above, creating a VM is quite simple in PowerShell, but attaching an ISO is not straightforward. To make things easier, I used plink.exe, which allows you to run Linux commands remotely (shown above, the plink line). I created a script on the Controller VM and executed it remotely. What is in the script? I must commend the Nutanix dev team for creating such an intelligent scripting platform. You don't have to learn anything except to know when to press Tab 🙂

/usr/local/nutanix/bin/acli vm.create $1 memory=1G num_vcpus=1
/usr/local/nutanix/bin/acli vm.nic_create $1 network=11
/usr/local/nutanix/bin/acli vm.disk_create $1 bus=scsi create_size=18G container=sss
/usr/local/nutanix/bin/acli vm.disk_create $1 cdrom=true clone_from_image=AHV
/usr/local/nutanix/bin/acli vm.disk_create $1 cdrom=true clone_from_image=$1
/usr/local/nutanix/bin/acli vm.on $1

AHV is the name of the OS ISO file I created using OSCDIMG.exe following Derek's blog, and $1 is the argument. If you boot Windows 2012 R2 without the VirtIO drivers, the SCSI controller and NIC are not detected; VirtIO is the AHV world's equivalent of VMware Tools. I found his blog post extremely helpful for integrating the VirtIO drivers. There is a little error in his batch file, quite relevant if you use Index=3 or 4.

NB: I have stored this file in the /tmp directory; however, if the node is restarted this file is deleted.

Conclusion: It is quite surprising, and to a large extent pleasing, to have started with one intention in mind and ended up attaining a completely different goal. The surprising part is that I wanted to do mass provisioning of Windows 2012 R2 on AHV, but during my research I realized it is as simple as anything with Nutanix is, so I chose to focus on automation instead. The pleasing part is that I learnt how the answer file works and used that knowledge to build a completely automated installation of Windows 2012 R2 on AHV. Happy learning! Below is the full PowerShell script for your reference; the shell script is already posted above. Next post :) : do the same thing in the VMware world.


Add-PSSnapin -Name NutanixCmdletsPSSnapin
#$pwd=read-host "Enter password for nutanix cluster" -AsSecureString
Connect-NTNXCluster -server -UserName admin -AcceptInvalidSSLCerts -ForcedConnection
$VMNamestr=read-host "Enter the name of virtual machine"
$IP=read-host "Please enter IP for this Machine"
#casting into strings
Remove-Item $xmldestination\*.xml
$xml = New-Object XML
Copy-Item $xmlsourcepath $xmldestination
& 'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\oscdimg.exe' -n $xmldestination $answerISO
$imgCreateSpec = New-NTNXObject -Name ImageImportSpecDTO
New-NTNXImage -Name $VMName -Annotation "$VMName Answer File" -ImageType ISO_IMAGE -ImageImportSpec $imgCreateSpec
start-sleep 10
.\plink.exe nutanix@ -P 22 -pw nutanix/4u /tmp/createvm $VMName

My Learnings on Sysprep, Answerfile and Mass Deployment -Post01

I started with the aim of finding information on how to mass deploy Windows 2012 R2 on AHV and ended up learning a whole lot of things. I wanted to know how we can clone VMs in AHV, i.e. Acropolis Hypervisor. Well, there are multiple ways to do it. I want to talk about the one which is relevant to AHV, and I will explore the other options via this series of posts.


Create an OSE (operating system environment) based on Windows 2012 R2 with the following features:

  1. Automatic partitioning of the Windows OS disk
  2. Automatic selection of Windows 2012 R2 Standard Edition
  3. Automatic addition of the Windows server to the domain
  4. Automatic creation of one local user ID with admin privileges
  5. Automatic enabling of Remote Desktop
  6. Automatic configuration of the time zone
  7. Automatic disabling of Enhanced I.E. security features for administrators
  8. Automatic disabling of "Welcome to Server Manager" at logon
  9. Automatic configuration of the PowerShell execution policy to RemoteSigned
  10. Automatic installation of RSAT tools and Telnet client

The list doesn’t end here.

In order to achieve this, you must know how to create an answer file. The answer file creation process is explained all over the place, but I didn’t find a simple post about it. First and foremost you need the Windows Assessment and Deployment Kit (Windows ADK) for Windows 8.1 Update; it is here. Download and install it. The installer file is just under 1.5 MB. Install it and it will ask you the following question.


Select the appropriate choice. I chose to install on the same PC, so I left the default selection, pressed Next, Next and selected only the Deployment Tools.


Post installation, you need to take the trouble to find where Windows System Image Manager is; I suggest you create a shortcut on the taskbar. Now you need the ISO. You can’t use an evaluation version; you must have a licensed ISO. You can either mount the ISO or extract it; I prefer to extract. Create a directory of your choice (mine is WorkingDir, as shown below). After the ISO is extracted, go to the path shown below.


Copy install.wim into the WorkingDir folder. Open Windows System Image Manager and open the install.wim file by going to Windows Image and right-clicking.


You will get a prompt as shown below; select the edition of the operating system.


It will prompt you to create a catalog. Just say “Yes”. It will take ample time to create the catalog.


Now, to create a new answer file, click as shown below.


To complete the answer file you need to add various components shown above. This is the very meat of the entire post: loads of options are available, and which ones to choose and what to fill in is very important. Let’s first add Microsoft-Windows-International-Core-WinPE; this is basically going to automate the default language, locale and other international settings.


After you add it to pass 1, fill in the details. If you get lost, just use the Help; it is an excellent source of information.


Then add the Microsoft-Windows-Setup component. It contains settings that enable you to select the Windows image that you install, configure the disk that you install Windows to, and configure the Windows PE operating system. Now this has lots of stuff, so let’s start from top to bottom. There is nothing in DiskConfiguration to configure other than what is shown below.


Right click on DiskConfiguration and select Insert New Disk. We will wipe Disk 0, configured as below.


After the disk is wiped, you need to create and define the partitions. All our SOEs will have an 80 GB drive just for installing the guest OS and basic software, e.g. AV, monitoring agents, VMware Tools, etc. No applications. We will create two partitions, one for system and the other for Windows.


The system partition will be 350 MB in size and must not be extended.


Similarly, the Windows partition will have Extend set to true and will be the second partition.


If you are installing Windows to a blank hard disk, you must use the CreatePartitions and ModifyPartitions settings to create and format the partitions on the disk.


Make Partition 1 active; it will be labeled System. Order 1 means it will be created first.


Now Partition 2, where the OS will be installed, will be labeled Windows and assigned drive C:\.
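Putting the screenshots together, the XML that Windows System Image Manager produces for this disk layout looks roughly like this (a hand-written sketch of a fragment, not exported output; the values follow the steps above):

```xml
<!-- Sketch: wipe Disk 0, create a 350 MB System partition and an extended
     Windows partition, then mark System active and assign C: to Windows -->
<DiskConfiguration>
  <Disk wcm:action="add">
    <DiskID>0</DiskID>
    <WillWipeDisk>true</WillWipeDisk>
    <CreatePartitions>
      <CreatePartition wcm:action="add">
        <Order>1</Order>
        <Size>350</Size>
        <Type>Primary</Type>
      </CreatePartition>
      <CreatePartition wcm:action="add">
        <Order>2</Order>
        <Extend>true</Extend>
        <Type>Primary</Type>
      </CreatePartition>
    </CreatePartitions>
    <ModifyPartitions>
      <ModifyPartition wcm:action="add">
        <Order>1</Order>
        <PartitionID>1</PartitionID>
        <Active>true</Active>
        <Format>NTFS</Format>
        <Label>System</Label>
      </ModifyPartition>
      <ModifyPartition wcm:action="add">
        <Order>2</Order>
        <PartitionID>2</PartitionID>
        <Format>NTFS</Format>
        <Label>Windows</Label>
        <Letter>C</Letter>
      </ModifyPartition>
    </ModifyPartitions>
  </Disk>
</DiskConfiguration>
```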



Now let’s move to ImageInstall, which specifies the Windows image to install and the location to which it is installed. InstallFrom doesn’t apply to an ISO installation, so skip it. You must specify either the InstallTo or the InstallToAvailablePartition settings (shown below).





However, we need a specific installation path for the image, and therefore we need to add MetaData.


Finally, you must specify InstallTo, e.g. Disk 0 and Partition 2; this is where you will install the operating system.
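For reference, the resulting ImageInstall section looks roughly like this (a hand-written sketch of a fragment; the edition name in MetaData is an assumption based on Task 2, so verify yours against the image names in your install.wim):

```xml
<!-- Sketch: MetaData selects the edition, InstallTo picks Disk 0 / Partition 2 -->
<ImageInstall>
  <OSImage>
    <InstallFrom>
      <MetaData wcm:action="add">
        <Key>/IMAGE/NAME</Key>
        <Value>Windows Server 2012 R2 SERVERSTANDARD</Value>
      </MetaData>
    </InstallFrom>
    <InstallTo>
      <DiskID>0</DiskID>
      <PartitionID>2</PartitionID>
    </InstallTo>
  </OSImage>
</ImageInstall>
```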


Tasks 1 and 2 are achieved.


On this screen, we will accept the EULA and skip the product key, as I don’t have a valid product key. You can use the license keys mentioned here.



I’m skipping the name of the computer, as I don’t believe putting the computer name in the answer file is a recipe for mass deployment. I will explore this option in a future post.

4 Specialize

Add Microsoft-Windows-Shell-Setup to specialize Pass.

We need to add the same component again in 7 oobeSystem, but the options are completely different, as you will observe.


Enter the name of the organization, the registered owner and the time zone as shown above. Task 6 is achieved.

Add Microsoft-Windows-IE-ESC in pass 4 and enter False for IEHardenAdmin and True (the default) for IEHardenUser. Task 7 is achieved.


Add Microsoft-Windows-ServerManager-SvrMgrNc in pass 4 and enter True for DoNotOpenServerManagerAtLogon. Task 8 is achieved.


Add Microsoft-Windows-UnattendedJoin in pass 4 and edit the JoinDomain name as shown below. Next add Identification, which specifies the credentials to join a domain. Task 3 is achieved.


Use either Provisioning or Credentials to join the account to the domain.


Add Microsoft-Windows-TerminalServices-LocalSessionManager in pass 4 and set fDenyTSConnections to False to enable Remote Desktop, plus the settings below to open the firewall port. Task 5 is achieved.


Add Networking-MPSSVC-Svc in pass 4 to add the Remote Desktop firewall group. You must insert a firewall group, as shown below, to enable or disable the firewall for it. This also contributes to task 5.



Now let's provide an IP address to the VM. I don't believe the IP address should be part of unattend.xml; it is a property that changes per VM and should be dynamic. I have a post reserved for that, coming soon. For the sake of this post, let's complete the parameters. Drag the wow64_Microsoft-Windows-TCPIP component into the answer file as shown below.


In the Interfaces tab, right-click and select Insert New Interface.


In the Interface, type the Identifier. This identifier must be "Ethernet"; you can't use "Local Area Connection" here.


Below, in Ipv4Settings, don't touch anything, as everything here is optional.


Then there is Routes, which is for providing gateway details. Right-click Routes and select Insert New Route.


You can use any number for the Identifier integer; it is of little use here. Leave Metric blank. NextHopAddress should be the default gateway. Prefix should be



Finally, there is the unicast IP address, which is the IP address of the VM. Right-click and select Insert New IP Address. Key is 1 and value is the IP address, as shown below.
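The interface section ends up looking roughly like this. The IP address and gateway are example values only, and 0.0.0.0/0 is the conventional prefix for a default route; component attributes are again omitted:

```xml
<!-- Sketch of the TCPIP interface; 192.168.1.50/24 and 192.168.1.1 are
     example values, 0.0.0.0/0 is the conventional default-route prefix. -->
<Interfaces>
  <Interface wcm:action="add">
    <Identifier>Ethernet</Identifier>
    <UnicastIpAddresses>
      <IpAddress wcm:action="add" wcm:keyValue="1">192.168.1.50/24</IpAddress>
    </UnicastIpAddresses>
    <Routes>
      <Route wcm:action="add">
        <Identifier>1</Identifier>
        <Prefix>0.0.0.0/0</Prefix>
        <NextHopAddress>192.168.1.1</NextHopAddress>
      </Route>
    </Routes>
  </Interface>
</Interfaces>
```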


7 oobeSystem

Add Microsoft-Windows-Shell-Setup to the oobeSystem pass to enable autologon, as shown below.


Create a local user and give it administrator rights as shown below. Task 4 is achieved.



For every account you create, you must add a password value, as shown above.

Now the final piece: FirstLogonCommands. These commands run once autologon is enabled for the administrator, and they run with administrator privileges. I have selected synchronous commands and provided the order in which they should run. I'm using PowerShell to install the RSAT and Telnet tools, and in the second command I'm changing the PowerShell execution policy to RemoteSigned. Both commands are pasted below for better visibility.


%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command Import-Module ServerManager; Add-WindowsFeature RSAT-Role-Tools; Add-WindowsFeature RSAT-DNS-Server; Add-WindowsFeature Telnet-Client


%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command set-executionpolicy remotesigned -force >> C:\Users\Public\Documents\setExecution.log
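Wired into the answer file, the two commands above sit inside FirstLogonCommands roughly like this (a sketch; the Description text is mine, and the command lines are the ones shown above):

```xml
<!-- Sketch of FirstLogonCommands with the two commands from this post. -->
<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <Description>Install RSAT and Telnet tools</Description>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command Import-Module ServerManager; Add-WindowsFeature RSAT-Role-Tools; Add-WindowsFeature RSAT-DNS-Server; Add-WindowsFeature Telnet-Client</CommandLine>
  </SynchronousCommand>
  <SynchronousCommand wcm:action="add">
    <Order>2</Order>
    <Description>Set PowerShell execution policy</Description>
    <CommandLine>%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command Set-ExecutionPolicy RemoteSigned -Force</CommandLine>
  </SynchronousCommand>
</FirstLogonCommands>
```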


Tasks 9 and 10 are achieved. At this stage, the answer file is ready.

A few tips

  1. Select "Hide sensitive data" to hide passwords.
  2. The domain-join password doesn't get encrypted; you need to find a workaround for it. That is my next post.
  3. Every time you save the answer file, it is validated by default.

Attaching answer file

Answer file can be attached using

  1. USB drive
  2. External disk
  3. CDROM Image

For AHV, I have yet to figure this out. There are posts around that advocate burning the unattend file directly onto the Windows CD or inserting it into the Windows ISO. Neither approach is scalable. The XML file will be unique per VM, so you need a mechanism that generates a unique XML file for each VM without much hassle, and the same file must be seamlessly attached as a CD-ROM or otherwise made visible to the boot process.

For this post I'm going to use the built-in tool oscdimg.exe. This exe is part of the Windows AIK and is located in

C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg folder.

Save the XML file to a folder. In my case I created a folder named Answer and copied the unattend file into it, as shown below.


Run the following command:

oscdimg.exe -n c:\Answer c:\ans999.iso


That is all. Attach the resulting ISO to AHV, boot the VM, and it should read the answer file. The only caveat: you have to attach an additional CD-ROM to the VM and ensure it is the second IDE device, not the first. The first IDE device is used to boot from the ISO.
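One hedged way to attack the "unique XML per VM" problem mentioned earlier is to keep unattend.xml as a template and render one copy per VM, then run oscdimg once per rendered file. The sketch below only shows the templating half; the `$ip` placeholder, file-naming scheme and `render_answer_file` helper are all made up for illustration.

```python
import tempfile
from pathlib import Path
from string import Template

# Hypothetical per-VM templating sketch. In practice the template would be
# the full unattend.xml; here only the unicast IP fragment is templated.
SNIPPET = Template(
    "<IpAddress wcm:action='add' wcm:keyValue='1'>$ip</IpAddress>"
)

def render_answer_file(ip, out_dir):
    """Write one rendered answer-file fragment per VM and return its path.
    Each rendered file would then be wrapped in its own ISO with oscdimg."""
    out = Path(out_dir) / "unattend-{}.xml".format(ip)
    out.write_text(SNIPPET.substitute(ip=ip))
    return out

with tempfile.TemporaryDirectory() as d:
    files = [render_answer_file(ip, d) for ip in ("10.0.0.11", "10.0.0.12")]
    print([f.name for f in files])
```

A wrapper script would then call `oscdimg.exe -n <folder> <vm-name>.iso` for each rendered file, giving every VM its own small answer-file ISO.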


Advertise & Reserved Capacity in Containers in Nutanix

This blog is the third part of the previous posts here and here. In this post I will focus on the use cases of reserved capacity and advertised capacity. I have created a super simple Visio below with three examples. All nodes have the same configuration: a single appliance with four nodes, each node with two SSDs and four HDDs. Each node contributes 10 TB of usable capacity when configured in a cluster. The bottom example is the default configuration. Here, for the sake of discussion, we have created a single storage pool and three containers. By default, each container sees 40 TB of storage. All containers are thin provisioned, and space is consumed on a first-come, first-served basis.


The middle example explains reserved capacity. Three containers are created. On the first container I have reserved 10 TB; the second and third containers are created with the default configuration. As a result of the reservation, containers 2 and 3 now have only 30 TB of usable capacity, while container 1 has 10 TB reserved. Reserved capacity can be extremely helpful when you need to charge a customer on an allocation model combined with a pay-as-you-go model. You can allocate 40 TB for customerA in container 1 and reserve 10 TB for him: customerA pays for 10 TB upfront and pays for the remaining 30 TB as and when he uses it. Since the storage pool is shared, there is no guarantee that storage will be available beyond the 10 TB reservation.

The top example is of advertised capacity. Advertised capacity is interdependent with reserved capacity: you cannot reserve 10 TB and advertise 15 TB. The hypervisor cannot see or use anything beyond the advertised capacity. In our example I have advertised only 2 TB on container 3, so containers 1 and 2 will see 40 TB as their maximum usable capacity, while container 3 will see only 2 TB. Hope this is clear now.
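To make the arithmetic in the three examples concrete, here is a toy model of the rules described above; it is purely illustrative, not a Nutanix API. A container sees the shared pool minus what other containers have reserved, capped by its own advertised capacity if one is set.

```python
# Toy model of the capacity rules above (illustrative only).
POOL_TB = 40  # four nodes x 10 TB usable

def visible_capacity_tb(reserved_by_others_tb=0, advertised_tb=None):
    """Capacity a container reports under the rules described in the post."""
    cap = POOL_TB - reserved_by_others_tb
    if advertised_tb is not None:
        cap = min(cap, advertised_tb)
    return cap

# Middle example: container 1 reserves 10 TB, so containers 2 and 3 see 30 TB.
print(visible_capacity_tb(reserved_by_others_tb=10))  # 30
# Top example: container 3 advertises 2 TB, so it sees only 2 TB.
print(visible_capacity_tb(advertised_tb=2))           # 2
```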

Let's move to the remaining part of the previous blog. I would like to share the mind maps I have created for the compression, deduplication and Erasure Coding features. I have used Edraw mind-map software; it is excellent if you wish to add style to your mind maps.

In the deduplication mind map, it is worth mentioning that there are two tiers where deduplication occurs, i.e. the performance tier and the capacity tier, and each tier has some requirements to meet.


The compression mind map covers which compression library is used and the use cases for compression.


In Erasure Coding, the map explains what a strip is made of and the drawbacks of a bigger strip size, which increases your restore time. Give special attention to the prerequisites and recommendations section, which I mentioned I would share in this post.


Hope you find it useful.

Use cases for creating Multiple Containers in Nutanix

In the last post I discussed the various considerations at hand for choosing a Nutanix node for your business use case. This post is a further extension of it. Here I would like to describe various use cases for creating containers. First and foremost, we must understand what a container is. Nutanix documentation explains that containers are a logical extension of the storage pool. All containers are backed by a single pool. If you have a storage pool of 10 TB and you create 2 containers out of it, each container will see 10 TB of storage space. This explains the logical extension. By default, all containers are thin provisioned, so you don't need to configure thin provisioning at the hypervisor layer. Now, this is the reason I insist you wear the storage administrator thinking hat. Normally a storage admin would present either thick or thin LUNs to the hypervisor. I can imagine a question popping up in your mind: how do you do that here? I have addressed this in the next post.

Nutanix recommends creating a single pool and a single storage container to dynamically optimise the distribution of resources like capacity and IOPS. However, you will often find a need to create more than one container. What are those needs?

A yes to any of the questions below means you need more than one container:

  1. Do you need RF=3 for some applications?
  2. Do you need to enable compression feature for some applications?
  3. Do you need to enable different deduplication policy at storage level?
  4. Do you need Erasure Coding along with RF for some applications?

Before enabling any of these features, you need to know what they are, what their use cases are, and, as a designer, what their impact would be.

Redundancy Factor (RF)

If you need to protect some applications at RF=3, it means two Nutanix nodes can fail simultaneously without those applications being impacted. Storage admin cap on, please: here again we are talking about data protection, not VM protection. In other words, if you have a container at RF=2 and more than one node fails, there is a potential that some extents of a VM become unavailable. RF=3 is a very specific business case; it may not apply in all situations, and there are prerequisites to meet. Refer to the Visio below.
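The trade-off can be reduced to simple arithmetic: with redundancy factor RF, data is kept in RF copies, so RF-1 nodes can fail simultaneously without data loss, and usable capacity is raw capacity divided by RF. A sketch (illustrative arithmetic only, not a Nutanix sizing tool):

```python
# Illustrative RF arithmetic: copies kept, failures tolerated, usable space.
def failures_tolerated(rf):
    """Simultaneous node failures survivable with redundancy factor rf."""
    return rf - 1

def usable_tb(raw_tb, rf):
    """Effective capacity when every extent is stored rf times."""
    return raw_tb / rf

print(failures_tolerated(2), failures_tolerated(3))  # 1 2
print(usable_tb(40, 2))  # 20.0
```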


Compression

In Nutanix terminology it is referred to as MapReduce compression. Data is compressed, but when? Well, you compress it either as it is written or after it is written. Which is good for us? Storage admin hat? Not really; you need the hypervisor admin hat this time.

In-line Compression

If you want your data to be compressed as it is written, you must know the nature of the data being written to the storage. The Nutanix documentation states: use it for sequential workloads, e.g. Hadoop and data analytics. Database log files are by nature sequential, so they might occur to you as a good candidate. They can be, as long as the data is not natively compressed. Nutanix recommends not using the compression feature where data is natively compressed.

Post Compression

Data is compressed after it is written to disk; this disk is the capacity tier. Nutanix recommends post compression for write-once, read-frequently data, e.g. home folders, archiving and backup solutions.


Deduplication

This is a must for any storage pool, and by all means a popular feature.

In-line deduplication

Data is deduplicated as and when it is written to disk. In Nutanix terms this is referred to as fingerprinting. What is the best data to get the maximum out of this? Persistent desktops, full clones and P2V migrations. The biggest advantage you get from in-line deduplication is maximum space in the performance tier. The performance tier is made up of SSD plus memory; in the storage world this is the write cache (sold at a premium by storage vendors). So you can understand its impact the moment you think from a storage admin's perspective.

On-disk deduplication, by contrast, is mostly focused on capacity savings.

Erasure Coding (EC)

EC gives you capacity savings over and above compression and deduplication. It is strongly recommended when space optimization is your goal or when you are worried about losing 50% of your capacity to the redundancy factor; that fear is somewhat mythical, refer to Josh's post on it. Good use cases are again file servers, backup and archival. You can see that compression and EC go hand in hand.
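The capacity argument is easy to see in numbers: RF stores full copies, while EC stores parity blocks for a strip of data blocks. The 4/1 strip below is only an example shape, not Nutanix's exact strip size for every cluster.

```python
# Illustrative overhead comparison between full copies (RF) and parity (EC).
def rf_overhead(rf):
    """Extra capacity consumed as a fraction of the data: RF2 -> 1.0 (100%)."""
    return float(rf - 1)

def ec_overhead(data_blocks, parity_blocks):
    """Extra capacity for an EC strip: a 4/1 strip -> 0.25 (25%)."""
    return parity_blocks / data_blocks

print(rf_overhead(2))      # 1.0
print(ec_overhead(4, 1))   # 0.25
```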

Where not to use these features

  1. Do not use compression where applications natively compress data, e.g. JPEG files, databases, heavy random writes, frequent overwrites. Similarly, do not use the EC feature where there are frequent overwrites. The rate of space-savings return diminishes as cluster size increases beyond 6 nodes. EC has some prerequisites, which I've explained in the mind map in my next post under prerequisites and recommendations.
  2. Deduplication is strongly discouraged when using linked clones.


I tried to articulate my 855 words in the Visio below. Prerequisites are referred to as WHAT: what you need to enable the feature. Worth noting, it is the license type you need; for example, on-disk deduplication, EC, RF=3 and post compression need a Pro license. WHY denotes why you need the feature. I have discussed reserved capacity and advertised capacity in the next post.


Hope you find it useful.

Related Posts: http://www.vzare.com/?p=4451

Nutanix Appliances, Node and Data point to select right one for your needs

In any architecture, everything starts with requirements. Gathering requirements and mapping them to a particular technology is one of the main tasks of an architect. In this post I'm going to talk about everything you need to map to Nutanix nodes and their other features. Below is a very extensive Visio diagram of the various appliances (and nodes) available in the Nutanix offering. On the left-hand side are the business requirements: SMB, intensive workload, business critical or mission critical. You might be aware there are similar offerings from Dell and, most recently, Lenovo; for this post I'm sticking to the offerings made by Nutanix. Before I dive into how I built these swim lanes, I would like to provide the references I have used.


[1] Online Plus course: http://www.nutanix.com/services/education/#schedule. There is a discount going on for this course (PromoCode: APRIL16)

[2]Sizing Guide: http://designbrewz.com/

[3]Nutanix Specification Sheet: http://go.nutanix.com/rs/nutanix/images/Nutanix_Spec_Sheet.pdf

I have kept blue as the base color in this sheet. If you see a different color, it suggests there is another box of the same color: the same color denotes the same properties, e.g. CPU/network/memory. For example, the NX-8150 has 12- and 20-core processor options, which are also offered in the NX-8035-G4.

You should start from the left-hand side, where the business use case is mentioned. Let's take the NX9000, the simplest example, to understand the flow of this diagram.

Mission critical application, Consistent low latency

In technical terms, it is the all-flash model, referred to as the NX9000 series. The NX9060-G offers two processor options (12, 24 cores), 128 to 512 GB RAM, 6 SSDs, and 2 x 1 Gbps and 1 x 1 Gbps network cards. Up to the Network Cards column, this is all you get in a single node. Nutanix requires a minimum of three nodes and recommends a minimum of 4 nodes for RF2 and N+1. RF = Redundancy Factor.


So if you wish to know how many cores and how much memory and storage the NX9000 series can give, you now have this information handy. The NX9000 appliance (NX9060-G4) comes with 4 nodes. For 4 nodes, you can get either 48 or 96 cores, with 512 GB minimum to 2048 GB maximum RAM. Usable storage space is calculated from the Sizing Guide [2] based on RF2. So you have the right information on whether you can host your workload on a single block or would need more blocks.

I have also mentioned network and power consumption. This is really needed for the datacenter guys to know how much power will be required and how many ports need to be patched. I also wished to include rack space required per block, but in most cases it is 2U; there are some exceptions, which I discuss below.

Business Critical Application (Exch, SQL, SAP, Oracle)

Let's take a slightly more complicated example: the NX8000. The most complicated is the NX3000 series, which I have yet to work on; that being said, the NX3000 is the most popular model. When I say complicated, it is not about the model itself; it is about the various offerings available and how difficult it becomes to choose without specific data. To make a decision you need data. That is the main idea of this blog post: you now have the data to make a sound decision. Let's talk about it.


As seen above, the NX8000 has 3 models, and each model offers at minimum 3 processor types. It is worth mentioning that the NX8150 and NX8150-G4 are single-node appliances, so in this case you need 4 appliances per the Nutanix recommendation for RF=2 and N+1. Since each appliance is 2U, 8U of rack space is needed, and you would need somewhere between 650-700 x 4 = 2600 to 2800 W per block. Please note I have quoted the maximum power from the Nutanix specification sheet [3]. Now let's narrow down: the NX8150 has more networking options, so you need more cables and more ports on the switch side. Please note I have not included the available add-on network cards. If you read from the left side and come to a decision between the NX8150 and NX8150-G4, you have several data points. The first is the cores you need: the G4 offers anywhere between 16-36 cores, and power consumed is 650 W. If your goal is to consolidate more and optimize for the best power-to-performance ratio, you can select the G4 right away. If, on the other hand, your application is expected to be highly network intensive, then the NX8150 could be your best bet, as it has more network port options available out of the box. If you have to choose between the 8150 and 8150-G4, the decision factors would be maximum RAM, maximum usable storage and network ports needed. To repeat, both are 2U single-node appliances.

Now let's discuss the third option, the NX8035-G4: it is two nodes per appliance, and power is just 950 W per appliance. So for 4 nodes (2 appliances) your power is 1900 W. Let's see how close you get to 36 cores: a maximum 28-core option is available. So you can choose the NX8035-G4 with the 28-core option for a great price-to-performance ratio. Are you losing anything? Yes: if your storage requirement is between 13 and 43 TB you are good; anything above and you will have to fall back to the NX8150 model. This is just my way of looking at and extracting value out of the Visio I've created. Hope it helps. In case you haven't noticed, we are now talking in TBs, no longer in GBs. I have attached a two-page PDF; any row in "Total Usable Storage per Block" is in TB. Don't miss that the 8035 offers 4 different processors.
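The block-level power comparison above is just multiplication; here it is spelled out. The per-appliance wattages are the maximum figures quoted from the spec sheet [3] in this post, so treat the numbers as illustrative.

```python
# Reproducing the block-level power arithmetic from the comparison above.
def block_power_w(appliances, max_watts_per_appliance):
    """Max power draw for a block of identical appliances."""
    return appliances * max_watts_per_appliance

# NX8150 / NX8150-G4: single-node appliances, 4 appliances for RF2 + N+1.
print(block_power_w(4, 650), block_power_w(4, 700))  # 2600 2800
# NX8035-G4: two nodes per appliance, so 4 nodes = 2 appliances.
print(block_power_w(2, 950))  # 1900
```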


ProTip: If you work for a service provider, you will instantly understand the value of this. In the service provider world, knowing these details enables us to provide the most value to the customer in an efficient and optimal way.

Download PDF Copy

[Nutanix Cluster] Configure email based Alert configuration

Before I explain the features mentioned in the subject of this blog, I would like to show you the HTML report. The report below could not be simpler; do you agree? It is a snapshot of a Nutanix cluster. It is another reason to create a single cluster, as discussed in the previous post: more clusters mean more reports. The most fascinating part of the report is that it is an out-of-the-box (OOB) experience. It details the cluster name, block ID and cluster version. The table below details the alert severity, where it occurred (e.g. node/block ID), and describes in a very lucid manner what the issue is and its possible cause. This cluster-specific health check is titled Email Daily Digest:



How to configure?

The screen below shows how to configure email alerts. If you check "Email Daily Digest", you will get the above report. Enter the email IDs you wish to send it to. By default, Nutanix support is added to the list. Isn't that great? As long as that mailbox is monitored and port 80/8443 is open, alerts are sent to the Nutanix support portal. The Email Alert checkbox is something I will cover later. Hint: it is one of the best alert mechanisms I have seen so far.


Now, the same email configuration is pulled by NCC. NCC is Nutanix Cluster Check; Aakash Jacob has written a great article on it here, and I strongly recommend reading it. NCC, however, runs on every node in the cluster, and therefore this report is going to be a little cluttered depending on the number of nodes configured in the cluster. Have a look at the screen capture from my Nutanix CE edition. It is cluttered, but it has loads of tips/tricks and KB references on how to fix issues. Unfortunately, the KB portal is only open to customers and partners. Hope that will change soon.


How to configure NCC to send alerts?

ncc --set_email_frequency=24

Reference: http://nutanix.blogspot.ae/2015/01/ncc-swiss-army-knife-of-nutanix.html

Just set the frequency per your requirement. I have configured it to 24 hours below.


What time of day do you get these reports?

Well, I have yet to find out, as my time zone was set to PDT. I have changed it to GST. Tricky: Dubai is in Asia (to my surprise). I blame CentOS rather than Nutanix for it. So I learnt how to set the time zone on a cluster.


Did you know it is enabled by default? You don't need any further evidence to convince yourself why that is so; start using it. Do note this is only for the Nutanix cluster and not the hypervisor. NCC checks are very critical and must be enabled.

Please note that with hyper-converged you have to wear both the hypervisor administrator and storage administrator "thinking hats". You can quite easily mess up the data. More on that later.

Disrupting Datacenter