[NUTANIX] Internal CA-Signed Certificate for Console Access to Prism

SSL certificates are used to encrypt communication between client and server, and a signed certificate ensures the server is authenticated. Self-signed certificates are not signed by a third party and therefore cannot be fully trusted. For internal services, you can use an internal Certificate Authority (internal CA). Nutanix uses SSL to secure communication with a cluster, and the web console allows you to install SSL certificates.

Nutanix provides a simple way to configure an SSL signed certificate to encrypt communication between the console and the server. You need a Microsoft CA and OpenSSL. OpenSSL can be downloaded from here, and installation of a Microsoft CA is explained here. As with any certificate workflow, the Certificate Signing Request (CSR) is the first step. To create the CSR, you need an openssl.cfg file. The following is the file I created; I used a similar file for VMware certificates.

[ req ]
default_bits = 2048
default_keyfile = rui.pem
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:sssnut, IP:, DNS:sssnut.shsee.com, DNS:NTNX-f8b67341-A-CVM, IP:, DNS:NTNX-f8b67341-A-CVM.shsee.com

[ req_distinguished_name ]
countryName = AE
stateOrProvinceName = AbuDhabi
localityName = ME12
0.organizationName = SHSEE
organizationalUnitName = Nutanix Services
commonName = sssnut.shsee.com

Pay special attention to line 14 (the subjectAltName line). Do note that country codes are two letters only. I was using UAE and got an error while creating the CSR; for the UAE, the code is AE. Line 2 (default_bits) is the key length; various key lengths are supported, but do ensure the CA you are using supports at least a 2048-bit key. In the cfg file I edited only lines 14 and 17-22; everything else remains default. After you have downloaded OpenSSL from http://slproweb.com/products/Win32OpenSSL.html, extract it as-is to C:\ as shown, and take a backup of openssl.cfg.

You can refer to my previous post on the openssl.cfg file here.

Run the following command to create the CSR. Do note that the rui.pem file is the private key, which is unique per request.
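A minimal sketch of that command. The cfg from above is recreated inline here (with the SAN list trimmed to the DNS entries) so the snippet is self-contained; the filenames rui.pem and rui.csr are my choices, matching the cfg's default_keyfile.

```shell
# Recreate the openssl.cfg from above so this snippet is self-contained.
cat > openssl.cfg <<'EOF'
[ req ]
default_bits = 2048
default_keyfile = rui.pem
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:sssnut, DNS:sssnut.shsee.com

[ req_distinguished_name ]
countryName = AE
stateOrProvinceName = AbuDhabi
localityName = ME12
0.organizationName = SHSEE
organizationalUnitName = Nutanix Services
commonName = sssnut.shsee.com
EOF

# Generate the private key (rui.pem) and the CSR (rui.csr) in one step.
openssl req -new -nodes -config openssl.cfg -keyout rui.pem -out rui.csr

# Sanity check: print the subject that will be submitted to the CA.
openssl req -in rui.csr -noout -subject
```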


Browse to http://CertificateAuthorityFQDN/certsrv/

Upload the CSR to the Microsoft CA as shown below. Review the SlideShare for detailed steps.


That is all that is needed.

Finally, I wish to thank Marc for promoting my previous post. Believe it or not, the post hit its highest view count so far. The power of social media!

[VMware] Automation of Windows Server 2012 R2 using PowerShell and an Answer File

Last week I shared my learnings on building an answer file and automating Windows Server deployment on Acropolis Hypervisor (AHV). This post is very similar to that one, but covers deployment on the VMware platform. I really want to explain the code line by line, but that would make the post highly verbose, so let me keep it short and simple. You need to create a VM to install an operating system. For the virtual machine you need mandatory inputs, e.g. vCPU, vRAM, storage, guest OS, datastore and CD-ROM (for my automation workflow you need two CD-ROM drives). After the virtual machine is created, attach the operating system ISO. My script assumes you have already uploaded the ISO to the datastore. Below is the overall workflow.


For automation, you just need the path to the ISO. With that done, you need to update the answer file. Well, I know I said I'm creating the answer file; it was actually created in the previous post. All you need to do is update it with the two variables I mentioned above, i.e. server name and IP address. To do this, I load the XML file, update the parameters as shown below, and save the file.

$xml = New-Object XML
$xml.Load($xmlsourcepath)
$xml.unattend.settings[1].component[7].Interfaces.Interface.UnicastIpAddresses.IpAddress.'#text' = $IPaddress
$xml.unattend.settings[1].component[0].ComputerName = $VMName
$xml.Save($xmlsourcepath)
Please note I had to cast the values into strings; apparently it is a bug in PowerShell.

$VMName = [string]$VMNamestr
$IPaddress = [string]$IP

Now the task is to create an ISO file from the answer file and copy it to the datastore. Watch out: I created the ISO file with the same name as the server (see the full code below). This is helpful because the same ISO cannot be attached to different virtual machines, as each XML file has a unique IP and server name.

Now create an additional CD-ROM drive on the VM to attach the answer file ISO. When you attach the ISO to the VM, you can only set "Connect at power on" for the additional CD-ROM, but in order to actually connect it, it must be "Connected". See below what I mean.


So I attached the ISO and ticked the "Connect at power on" checkbox. Now when I power on the virtual machine, this additional CD-ROM comes up in a connected state, but by that time the OS has already started booting. As a workaround, I reset the VM after 5 seconds (the Restart-VM line below). This trick fixed the issue.

New-CDDrive -VM $VMName
Start-VM -VM $VMName -Confirm:$false
#attach the ISOs from the datastore
Get-CDDrive -VM $VMName -Name "CD/DVD drive 1" | Set-CDDrive -IsoPath $ISO -StartConnected:$true -Confirm:$false
Get-CDDrive -VM $VMName -Name "CD/DVD drive 2" | Set-CDDrive -IsoPath "[PhyStorage]\ISO\$VMName.iso" -StartConnected:$true -Confirm:$false
#check if each CD-ROM is connected; if not, connect it and reset the VM
$Cstates = Get-CDDrive -VM $VMName
foreach ($Cstate in $Cstates) {
    if ($Cstate.ConnectionState.Connected -eq $false) {
        Get-CDDrive $Cstate.Parent -Name $Cstate.Name | Set-CDDrive -Connected:$true -Confirm:$false
        Start-Sleep -Seconds 5
        Restart-VM -VM $VMName -Confirm:$false
    }
}

Here is the full code:

#Purpose: create the virtual machine, attach the OS ISO file, create the answer file,
#create an ISO of the answer file, add a secondary CD-ROM and attach it
Add-PSSnapin -Name *vmware*
Connect-VIServer -User servera09@shsee.com -Password VMware1!
$VMNamestr=read-host "Enter the name of virtual machine"
$IP=read-host "Please enter IP for this Machine"
#casting into strings
$VMName = [string]$VMNamestr
$IPaddress = [string]$IP
#VM Details
#remove old xml files from the destination folder
Remove-Item $xmldestination\*.xml
####Virtual machine is created######
New-VM -Name $VMName -Datastore $Datastore -DiskGB $diskinGB -MemoryGB $RAMinGB -GuestId $GuestOS -NumCpu $vCPU -ResourcePool Resources -Version v8 -CD
#------------------------------------------------update answer file---------------------------------------------------------------------------------------#
$xml = New-Object XML
$xml.Load($xmlsourcepath)
$xml.unattend.settings[1].component[7].Interfaces.Interface.UnicastIpAddresses.IpAddress.'#text' = $IPaddress
$xml.unattend.settings[1].component[0].ComputerName = $VMName
$xml.Save($xmlsourcepath)
Copy-Item $xmlsourcepath $xmldestination
& 'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\oscdimg.exe' -n $xmldestination $answerISO

#copy the answer file ISO to the datastore
Copy-DatastoreItem -Destination $answerISODestination -Item $answerISO
#add an additional CD-ROM for the answer file
New-CDDrive -VM $VMName
Start-VM -VM $VMName -Confirm:$false
#attach the ISOs from the datastore
Get-CDDrive -VM $VMName -Name "CD/DVD drive 1" | Set-CDDrive -IsoPath $ISO -StartConnected:$true -Confirm:$false
Get-CDDrive -VM $VMName -Name "CD/DVD drive 2" | Set-CDDrive -IsoPath "[PhyStorage]\ISO\$VMName.iso" -StartConnected:$true -Confirm:$false
#check if each CD-ROM is connected; if not, connect it and reset the VM
$Cstates = Get-CDDrive -VM $VMName
foreach ($Cstate in $Cstates) {
    if ($Cstate.ConnectionState.Connected -eq $false) {
        Get-CDDrive $Cstate.Parent -Name $Cstate.Name | Set-CDDrive -Connected:$true -Confirm:$false
        Start-Sleep -Seconds 5
        Restart-VM -VM $VMName -Confirm:$false
    }
}

[AHV] Automation of Windows Server 2012 R2 using PowerShell, an Answer File and ACLI

Last week I shared my learnings on building an answer file. I mention it for a reason, as this post is built on it. By now we have a beginner's understanding of how to create an answer file, which is sufficient to follow this post. Let's look at how to automate a Standard Operating Environment (SOE). My goal is OS automation. At the most basic level, the hostname and IP address are the bare minimum attributes that must be unique per server, so this post focuses on how to automate these two parameters.

First the credits

  1. Jon (next.nutanix.com)
  2. Derek for his post on integrating VirtIO Drivers

Initially I aimed to achieve the automation using PowerShell (Nutanix cmdlets); however, the information available in the help is very limited. Here is the approach: update the answer file, create an ISO of the answer file, upload the answer file ISO, attach the OS ISO to the VM, attach the answer file ISO to the VM, and boot the VM. For this automation I have already uploaded the OS ISO.

Below is the work flow


Except for the tasks of uploading the ISO file and ensuring the VM reads the answer file while booting, all other tasks were quite easy to achieve using PowerShell.

So I had to focus on a script that would let me upload an answer file to AHV using cmdlets and attach it to the VM. I figured out the first part, but the second part was definitely not coming through, at which point I reached out to Jon. Jon dropped a simple hint and I realized what needed to be done. Let me explain the complete script below.

The first step is updating the answer file. I already have a standard answer file (refer to the previous post); all I need is to ensure it is unique per VM. Server name and IP address are the bare minimum unique attributes of any server, so I take them as two user inputs (the read-host lines). I use these inputs further to build the VM name. After the answer file is updated, I use the OSCDIMG.exe tool to create an ISO of it, as shown below. I then copy this ISO to a location that is my web server's virtual directory.

Add-PSSnapin -Name NutanixCmdletsPSSnapin
Connect-NTNXCluster -server -UserName admin -AcceptInvalidSSLCerts -ForcedConnection
$VMNamestr=read-host "Enter the name of virtual machine"
$IP=read-host "Please enter IP for this Machine"
#casting into strings
$VMName = [string]$VMNamestr
$IPaddress = [string]$IP
#remove old xml files from the destination folder
Remove-Item $xmldestination\*.xml
$xml = New-Object XML
$xml.Load($xmlsourcepath)
$xml.unattend.settings[1].component[7].Interfaces.Interface.UnicastIpAddresses.IpAddress.'#text' = $IPaddress
$xml.unattend.settings[1].component[0].ComputerName = $VMName
$xml.Save($xmlsourcepath)
Copy-Item $xmlsourcepath $xmldestination
& 'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\oscdimg.exe' -n $xmldestination $answerISO

In the Connect-NTNXCluster line I have not put the password; you're prompted for a secure-string password. I had to use the -ForcedConnection switch because the versions of AHV and the Nutanix cmdlets do not match. The -AcceptInvalidSSLCerts switch is needed when you are using self-signed certificates.

Why am I using a web server? Well, I'm yet to figure out how to upload files into AHV using PowerShell, so my script pulls the ISO from a URL instead. Below is the snippet of the code. Here I deliberately name the ISO image after the VM.

$imgCreateSpec = New-NTNXObject -Name ImageImportSpecDTO
New-NTNXImage -Name $VMName -Annotation "$VMName Answer File" -ImageType ISO_IMAGE -ImageImportSpec $imgCreateSpec 
start-sleep 10 
.\plink.exe nutanix@ -P 22 -pw nutanix/4u /tmp/createvm $VMName 

It is worth knowing that ISO files in AHV are referred to by UUID and not by name. This means you can have more than one ISO with the same name.

Now that we have uploaded the ISO, all I need is to create the VM with the required specification and attach the ISOs. I have kept a standard specification of 2 vCPU, 4 GB RAM and 18 GB storage; it can easily be changed if you wish. As mentioned above, creating the VM is quite simple in PowerShell, but attaching the ISO is not straightforward. To make things easier, I used plink.exe, which allows you to run Linux commands remotely (shown above, the plink line). I created a script on the Controller VM and executed it remotely. What is in the script? I must commend the Nutanix dev team for creating such an intelligent scripting platform; you don't have to learn anything except when to press Tab 🙂

/usr/local/nutanix/bin/acli vm.create $1 memory=1G num_vcpus=1
/usr/local/nutanix/bin/acli vm.nic_create $1 network=11
/usr/local/nutanix/bin/acli vm.disk_create $1 bus=scsi create_size=18G container=sss
/usr/local/nutanix/bin/acli vm.disk_create $1 cdrom=true clone_from_image=AHV
/usr/local/nutanix/bin/acli vm.disk_create $1 cdrom=true clone_from_image=$1
/usr/local/nutanix/bin/acli vm.on $1

AHV is the name of the ISO file I created using OSCDIMG.exe, following Derek's blog. $1 is the argument (the VM name). If you boot Windows 2012 R2 without the VirtIO drivers, the SCSI controller and NIC are not detected. VirtIO is the VMware Tools equivalent in the AHV world. I found his blog post extremely helpful for integrating the VirtIO drivers. There is a small error in his batch file, which is quite relevant if you use Index=3 or 4.

NB: I have stored this file in the /tmp directory; however, if the node is restarted, this file is deleted.

Conclusion: It is quite surprising, and to a large extent pleasing, to have started with one intention in mind and ended up attaining a completely different goal. The surprising part is that I wanted to do mass provisioning of Windows 2012 R2 on AHV, but during my research I realized it is simple, as anything with Nutanix is, so I chose to focus on automation. The pleasing part is that I learnt how answer files work and used that knowledge to build a completely automated installation of Windows 2012 R2 on AHV. Happy learning. Below is the full PowerShell script for your reference; the shell script is already posted above. Next post :) : do the same thing in the VMware world.


Add-PSSnapin -Name NutanixCmdletsPSSnapin
#$pwd=read-host "Enter password for nutanix cluster" -AsSecureString
Connect-NTNXCluster -server -UserName admin -AcceptInvalidSSLCerts -ForcedConnection
$VMNamestr=read-host "Enter the name of virtual machine"
$IP=read-host "Please enter IP for this Machine"
#casting into strings
$VMName = [string]$VMNamestr
$IPaddress = [string]$IP
#remove old xml files from the destination folder
Remove-Item $xmldestination\*.xml
$xml = New-Object XML
$xml.Load($xmlsourcepath)
$xml.unattend.settings[1].component[7].Interfaces.Interface.UnicastIpAddresses.IpAddress.'#text' = $IPaddress
$xml.unattend.settings[1].component[0].ComputerName = $VMName
$xml.Save($xmlsourcepath)
Copy-Item $xmlsourcepath $xmldestination
& 'C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg\oscdimg.exe' -n $xmldestination $answerISO
$imgCreateSpec = New-NTNXObject -Name ImageImportSpecDTO
New-NTNXImage -Name $VMName -Annotation "$VMName Answer File" -ImageType ISO_IMAGE -ImageImportSpec $imgCreateSpec
start-sleep 10
.\plink.exe nutanix@ -P 22 -pw nutanix/4u /tmp/createvm $VMName

My Learnings on Sysprep, Answer Files and Mass Deployment - Post 01

I started with the aim of finding information on how to mass deploy Windows 2012 R2 on AHV and ended up learning a whole lot of things. I wanted to know how we can clone VMs in AHV, i.e. Acropolis Hypervisor. Well, there are multiple ways to do it. I want to talk about the one which is relevant to AHV; I will explore the other options via this series of posts.


Create an OSE (operating system environment) based on Windows 2012 R2 with the following features:

  1. Automatic partitioning of the Windows OS disk
  2. Automatic selection of Windows 2012 R2 Standard Edition
  3. Automatic addition of the Windows Server to the domain
  4. Automatic creation of one local user ID with admin privileges
  5. Automatic enabling of Remote Desktop
  6. Automatic configuration of the time zone
  7. Automatic disabling of Enhanced I.E. security features for Administrators
  8. Automatic disabling of "Welcome to Server Manager" at logon
  9. Automatic configuration of the PowerShell execution policy to RemoteSigned
  10. Automatic installation of the RSAT tools and the Telnet client

List doesn’t end here

To achieve this, you must know how to create an answer file. The answer file creation process is explained all over the place, but I didn't find a simple post about it. First and foremost you need the Windows Assessment and Deployment Kit (Windows ADK) for Windows 8.1 Update. It is here. Download and install it. The installation file is just under 1.5 MB. Run it and it will ask you the following question.


Select the appropriate choice. I chose to install on the same PC, so I left the default selection, pressed Next, Next, and selected only the Deployment Tools.


Post installation, you need to take the trouble to find where Windows System Image Manager is; I suggest you pin a shortcut to the taskbar. Now you need the ISO. You can't use an evaluation version; you must have a licensed ISO. You can either mount the ISO or extract it; I prefer to extract. Create a directory of your choice. Mine is WorkingDir, as shown below. After the ISO is extracted, go to the path shown below.


Copy install.wim into the WorkingDir folder. Open Windows System Image Manager and open the install.wim file by going to Windows Image and right-clicking.


You will get a prompt as shown below; select the edition of the operating system.


It will prompt you to create a catalog; just say "Yes". It will take ample time to create the catalog.


Now, to create a new answer file, click as shown below.


To complete the answer file you need to add various components, shown above. This is the very meat of the entire post: loads of options are available, and which ones to choose and what to fill in is very important. Let's first add Microsoft-Windows-International-Core-WinPE, which automates the default language, locale, and other international settings.


After you add it to pass 1, fill in the details. If you get lost, just use the Help; it is an excellent source of information.


Then add the Microsoft-Windows-Setup component. It contains settings that enable you to select the Windows image to install, configure the disk that you install Windows to, and configure the Windows PE operating system. This has lots of stuff, so let's start from top to bottom. There is nothing in DiskConfiguration to configure other than what is shown below.


Right-click on DiskConfiguration and select Insert New Disk. We will wipe Disk 0 as configured below.


After the disk is wiped, you need to create and define partitions. All our SOEs will have an 80 GB drive just for the guest OS and basic software, e.g. AV, monitoring agents, VMware Tools, etc.; no applications. We will create two partitions, one for the system and the other for Windows.


The system partition will be 350 MB in size and has to be non-extending.


Similarly, the Windows partition will have Extend set to true and will be the second partition.


If you are installing Windows to a blank hard disk, you must use the CreatePartitions and ModifyPartitions settings to create and format partitions on the disk


Make partition 1 active and label it System. Order 1 means it will be created first.


Now partition 2, where the OS will be installed, will be labelled Windows and assigned drive C:.



Now let's move to ImageInstall. ImageInstall specifies the Windows image to install and the location to which it is to be installed. InstallFrom doesn't apply to an ISO installation, so skip it. You must specify either the InstallTo or the InstallToAvailablePartition setting (shown below).





However, we need a specific installation path for the image, and therefore we need to add MetaData.


Finally, you must specify InstallTo, e.g. Disk 0 and Partition 2; this is where the operating system will be installed.
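Put together, the DiskConfiguration and ImageInstall settings described above correspond to an unattend.xml fragment roughly like the following. This is a hand-written sketch of the Microsoft-Windows-Setup component, not an export from Windows SIM; the component attributes are abbreviated and may differ in a real file.

```xml
<component name="Microsoft-Windows-Setup" processorArchitecture="amd64"
           publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
  <DiskConfiguration>
    <Disk wcm:action="add">
      <DiskID>0</DiskID>
      <WillWipeDisk>true</WillWipeDisk>
      <CreatePartitions>
        <!-- Order 1: 350 MB System partition, non-extending -->
        <CreatePartition wcm:action="add">
          <Order>1</Order>
          <Size>350</Size>
          <Type>Primary</Type>
        </CreatePartition>
        <!-- Order 2: Windows partition, extends to fill the disk -->
        <CreatePartition wcm:action="add">
          <Order>2</Order>
          <Extend>true</Extend>
          <Type>Primary</Type>
        </CreatePartition>
      </CreatePartitions>
      <ModifyPartitions>
        <ModifyPartition wcm:action="add">
          <Order>1</Order>
          <PartitionID>1</PartitionID>
          <Active>true</Active>
          <Format>NTFS</Format>
          <Label>System</Label>
        </ModifyPartition>
        <ModifyPartition wcm:action="add">
          <Order>2</Order>
          <PartitionID>2</PartitionID>
          <Format>NTFS</Format>
          <Label>Windows</Label>
          <Letter>C</Letter>
        </ModifyPartition>
      </ModifyPartitions>
    </Disk>
  </DiskConfiguration>
  <ImageInstall>
    <OSImage>
      <!-- Install to Disk 0, Partition 2 (the Windows partition) -->
      <InstallTo>
        <DiskID>0</DiskID>
        <PartitionID>2</PartitionID>
      </InstallTo>
    </OSImage>
  </ImageInstall>
</component>
```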


Task 1, 2 are achieved


On this screen, we will accept the EULA and skip the product key, as I don't have a valid product key. You can use the license keys mentioned here.



I'm skipping the name of the computer, as I believe putting the computer name in the answer file is not a recipe for mass deployment. I will explore this option in a future post.

4 Specialize

Add Microsoft-Windows-Shell-Setup to the specialize pass.

We need to add the same component again in pass 7 oobeSystem, but the options there are completely different, as you will observe.


Enter the name of the organization, the registered owner and the time zone as shown above. Task 6 is achieved.

Add Microsoft-Windows-IE-ESC in pass 4 and enter False for IEHardenAdmin and True (the default) for IEHardenUser. Task 7 is achieved.


Add Microsoft-Windows-ServerManager-SvrMgrNc in pass 4 and enter True for DoNotOpenServerManagerAtLogon. Task 8 is achieved.


Add Microsoft-Windows-UnattendedJoin in pass 4 and edit the JoinDomain name as shown below. Next, add Identification, which specifies the credentials used to join a domain. Task 3 is achieved.


Use either Provisioning or Credentials to join the machine to the domain.


Add Microsoft-Windows-TerminalServices-LocalSessionManager in pass 4 and set fDenyTSConnections to False to enable Remote Desktop; see below for opening the firewall port. Task 5 is achieved.


Add Networking-MPSSVC-Svc in pass 4 to add the Remote Desktop firewall group. You must insert a firewall group, as shown below, to specify what to enable or disable the firewall for. This also contributes to Task 5.



Now let's provide an IP address to the VM. I don't believe the IP address should be part of unattend.xml; it is a property that changes per VM and should be dynamic. I have a post reserved for that, coming soon. For the sake of this post, let's complete the parameters. Drag the wow64_Microsoft-Windows-TCPIP component into the answer file as shown below.


In the Interfaces node, right-click and select Insert New Interface.


In the Interface, type the Identifier. This identifier is "Ethernet"; you can't say "Local Area Connection" here, it has to be Ethernet.


Below, in Ipv4Settings, don't touch anything, as everything here is optional.


Then there is Routes, which is for providing the gateway details. Right-click Routes and select Insert New Route.


You can use any number for the Identifier integer; it is of little use here. Leave Metric blank. NextHopAddress should be the default gateway. Prefix should be



Finally, the unicast IP address, which is the IP address of the VM. Right-click and select Insert New IP Address. The key is 1 and the value is the IP address, as shown below.
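For reference, the interface settings above correspond to a fragment of the Microsoft-Windows-TCPIP component roughly like this. It is a hand-written sketch: the IP address, prefix length, route identifier and gateway are placeholder values, component attributes are abbreviated, and the optional Ipv4Settings are omitted as the post suggests.

```xml
<component name="Microsoft-Windows-TCPIP" processorArchitecture="amd64"
           publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
  <Interfaces>
    <Interface wcm:action="add">
      <!-- Must be "Ethernet", not "Local Area Connection" -->
      <Identifier>Ethernet</Identifier>
      <UnicastIpAddresses>
        <!-- Key 1, value = the VM's IP address (placeholder) -->
        <IpAddress wcm:action="add" wcm:keyValue="1">192.168.10.21/24</IpAddress>
      </UnicastIpAddresses>
      <Routes>
        <Route wcm:action="add">
          <Identifier>0</Identifier>
          <Prefix>0.0.0.0/0</Prefix>
          <NextHopAddress>192.168.10.1</NextHopAddress>
        </Route>
      </Routes>
    </Interface>
  </Interfaces>
</component>
```

The PowerShell snippets in the automation posts above update exactly this node, via UnicastIpAddresses.IpAddress.'#text'.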


7 oobe System

Add Microsoft-Windows-Shell-Setup to the oobeSystem pass to enable autologon as shown below.


Create a local user and give it administrator rights as shown below. Task 4 is achieved.



For every account you create, you must add a password value as shown above.
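As a sketch, the local-account piece of the oobeSystem Microsoft-Windows-Shell-Setup component looks roughly like this. It is hand-written, not exported from Windows SIM; the account name, display name and password are placeholders, and component attributes are abbreviated.

```xml
<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
           publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
  <UserAccounts>
    <LocalAccounts>
      <LocalAccount wcm:action="add">
        <Name>localadmin</Name>
        <DisplayName>Local Admin</DisplayName>
        <!-- Membership of Administrators gives the account admin rights -->
        <Group>Administrators</Group>
        <Password>
          <Value>Passw0rd!</Value>
          <PlainText>true</PlainText>
        </Password>
      </LocalAccount>
    </LocalAccounts>
  </UserAccounts>
</component>
```

With the Sensitive Data option mentioned in the tips below, Windows SIM stores the password value obfuscated instead of in plain text.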

Now the final piece: FirstLogonCommands. These commands run once you have enabled autologon for the administrator, and they run under administrator privileges. I have selected synchronous commands and provided the order in which they should run. I'm using PowerShell to install the RSAT tools and the Telnet client, and in the second command I'm changing the PowerShell execution policy to RemoteSigned. I have pasted both commands below for better visibility.


%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command Import-Module ServerManager; Add-WindowsFeature RSAT-Role-Tools; Add-WindowsFeature RSAT-DNS-Server; Add-WindowsFeature Telnet-Client


%WINDIR%\System32\WindowsPowerShell\v1.0\PowerShell.exe -command set-executionpolicy remotesigned -force >> C:\Users\Public\Documents\setExecution.log


Tasks 9 and 10 are achieved. At this stage the answer file is ready.

Few tips

  1. Select Sensitive Data to hide the password in the saved answer file.
  2. The domain join password doesn't get encrypted; you need to find a workaround for it. That is my next post.
  3. Every time you save the answer file, it is validated by default.

Attaching answer file

Answer file can be attached using

  1. USB drive
  2. External disk
  3. CDROM Image

For AHV, I have yet to figure this out. But there are posts around which advocate burning the unattend file directly onto the Windows CD or inserting it into the Windows ISO. Both approaches are not scalable. The XML file will be unique per VM, so you need a mechanism to ensure the XML file is generated and unique for each VM without much hassle, and the same file must be seamlessly attached as a CD-ROM or otherwise made visible to the boot process.
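One possible mechanism, sketched as a shell script: keep a template answer file with placeholder tokens, stamp a per-VM copy, and build an ISO named after the VM. The placeholder names, paths and the ISO step are my assumptions; on Windows the ISO step would be oscdimg.exe (part of the ADK), on Linux genisoimage/mkisofs.

```shell
# Stamp a per-VM copy of a template answer file, then build an ISO named after the VM.
VMNAME="WEB01"
IPADDR="192.168.10.21"

# Minimal stand-in template; in practice this is the unattend.xml built above,
# with placeholder tokens where the unique values go.
cat > unattend-template.xml <<'EOF'
<unattend>
  <ComputerName>PLACEHOLDER-NAME</ComputerName>
  <IpAddress>PLACEHOLDER-IP</IpAddress>
</unattend>
EOF

# One folder per VM, holding its stamped unattend.xml.
mkdir -p "Answer-$VMNAME"
sed -e "s/PLACEHOLDER-NAME/$VMNAME/" -e "s/PLACEHOLDER-IP/$IPADDR/" \
    unattend-template.xml > "Answer-$VMNAME/unattend.xml"

# ISO step (not run here):
#   Windows: oscdimg.exe -n Answer-WEB01 WEB01.iso
#   Linux:   genisoimage -o "$VMNAME.iso" "Answer-$VMNAME"
```

The resulting per-VM ISO can then be attached as the VM's second CD-ROM drive.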

For this post I'm going to use the built-in tool oscdimg.exe. This exe is part of the Windows ADK and is located in the

C:\Program Files (x86)\Windows Kits\8.1\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg folder.

Save the XML file to some folder. In my case I created a folder called Answer and copied the unattend file into it, as shown below.


run following command

oscdimg.exe -n c:\Answer c:\ans999.iso


That is all. Attach answerfile.iso to the VM on AHV, boot the VM, and it should read the answer file. The only caveat: you have to attach an additional CD-ROM to the VM and ensure it is the second IDE device, not the first; the first IDE device is used to boot from the OS ISO.


Advertised & Reserved Capacity in Containers in Nutanix

This blog is the third part of the previous posts here and here. In this post I focus on the use cases for reserved capacity and advertised capacity. I have created a super simple Visio diagram below with three examples. All nodes have the same configuration: a single appliance with four nodes, each node with two SSDs and four HDDs, and each node contributing 10 TB usable capacity when configured in a cluster. The bottom example is the default configuration. Here, for the sake of discussion, we have created a single storage pool and three containers. By default each container sees 40 TB of storage. All containers are thin provisioned and space is consumed on a first come, first served basis.


The middle example explains reserved capacity. Three containers are created. On the first container I have reserved 10 TB; the second and third containers are created with the default configuration. As a result of the reservation, containers 2 and 3 now have only 30 TB usable capacity, while container 1 has 10 TB reserved capacity. Reserved capacity can be extremely helpful when you need to charge a customer on a combination of an allocation model and a pay-as-you-go model. You can allocate 40 TB for customer A in container 1 and reserve 10 TB for them: you charge customer A upfront for 10 TB, and they pay for the remaining 30 TB as and when they use it. As the storage pool is shared, there is no guarantee that storage will be available beyond the 10 TB reservation.

The top example is of advertised capacity. Advertised capacity is interdependent with reserved capacity; you cannot reserve 10 TB and advertise 15 TB. The hypervisor cannot see or use anything beyond the advertised capacity. In our example I have advertised only 2 TB on container 3, so containers 1 and 2 will see 40 TB as their maximum usable capacity, while container 3 will see only 2 TB. Hope this is clear now.

Let's move to the remaining part of the previous blog. I would like to share the mind maps I have created for the compression, deduplication and erasure coding features. I used the Edraw mind-map software; it is excellent if you wish to add style to your mind maps.

In the deduplication mind map it is worth mentioning that there are two tiers where deduplication occurs, i.e. the performance tier and the capacity tier, and each tier has some requirements to meet.


The compression mind map covers which compression library is used and the use cases for compression.


In erasure coding, the map explains what a strip is made of and the drawbacks of having a bigger strip size, which increases your restore time. Give special attention to the prerequisites and recommendations section, which I mentioned I would share in this post.


Hope you find it useful.

Use cases for creating Multiple Containers in Nutanix

In the last post I discussed the various considerations at hand when choosing a Nutanix node for our business use case. This post is a further extension of it. In this post I would like to describe various use cases for creating containers. First and foremost we must understand what a container is. The Nutanix documentation explains that containers are a logical extension of the storage pool. All containers are backed by a single pool: if you have a storage pool of 10 TB and you create 2 containers out of it, each container will see 10 TB of storage space. This is what "logical extension" means. By default all containers are thin provisioned, so you don't need to configure thin provisioning at the hypervisor layer. This is the reason I insist you wear your storage administrator thinking hat: normally a storage admin would present either thick or thin LUNs to the hypervisor. I can imagine the question popping up in your mind of how you do this; I address it in the next post.

Nutanix recommends creating a single pool and a single storage container to dynamically optimise the distribution of resources like capacity and IOPS. However, you will often find a need to create more than one container. What are those needs?

If the answer to any of the questions below is yes, you need more than one container:

  1. Do you need RF=3 for some applications?
  2. Do you need to enable the compression feature for some applications?
  3. Do you need to enable a different deduplication policy at the storage level?
  4. Do you need erasure coding along with RF for some applications?

Before enabling any of these features, you need to know what they are, what their use cases are and, as a designer, what their impact would be.

Redundancy Factor (RF)

If you need to protect some applications at RF=3, it means two Nutanix nodes can fail simultaneously without the applications being impacted. Storage admin cap on, please: here again we are talking about data protection, not VM protection. In other words, if you have a container at RF=2 and more than one node fails, there is a potential that some extents of a VM are unavailable. RF=3 is a very specific business case; it may not apply in all situations and there are prerequisites to meet. Refer to the Visio below.


Compression

In Nutanix terminology compression is referred to as MapReduce compression. Data is compressed, but when? Well, you compress it either as it is written or after it is written. Which is good for us? Storage admin hat? Not really; you need your hypervisor admin hat this time.

In-line Compression

If you want your data to be compressed as it is written, you must know the nature of the data being written to the storage. The Nutanix documentation states to use it for sequential workloads, e.g. Hadoop and data analytics. Database log files are by their very nature sequential, so they might occur to you as a good candidate; they can be, as long as the data is not compressed natively. Nutanix recommends not using the compression feature where data is natively compressed.

Post Compression

Data is compressed after it is written to disk, on the capacity tier. Nutanix recommends using post-process compression where data is written once and read frequently, e.g. home folders, archiving and backup solutions.


Deduplication

Deduplication is a must for any storage pool; by all means a popular feature.

In-line deduplication

Data is deduplicated as and when it is written to disk; in Nutanix terms this is referred to as fingerprinting. What data can make the most of this? Persistent desktops, full clones and P2V migrations. The biggest advantage of in-line deduplication is maximum space in the performance tier, which is made up of SSD plus memory. In the storage world this is the write cache (sold at a premium by storage vendors), so you can understand its impact the moment you think from a storage admin's perspective.

On-disk deduplication is mostly focused on capacity savings.

Erasure Coding (EC)

EC gives you capacity savings over and above compression and deduplication. It is strongly recommended when space optimization is your goal or you are worried about losing 50% of capacity to replication; however, that loss is somewhat mythical, refer to Josh's post on it. Good use cases are again file servers, backup and archival. You could say compression and EC go hand in hand.

Where not to use these features

  1. Do not use compression where applications natively compress data, e.g. JPEG and databases, or where there are heavy random writes or frequent overwrites. Similarly, do not use the EC feature where there are frequent overwrites. The rate of space-savings return diminishes as the cluster size increases beyond 6 nodes. EC has some prerequisites, which I've explained in the mind map in my next post under prerequisites and recommendations.
  2. Deduplication is strongly discouraged when using linked clones.


I have tried to articulate my 855 words in the Visio below. The prerequisites are referred to as WHAT: what you need in order to enable the feature. Worth noting is the license type you need; for example, for on-disk deduplication, EC, RF=3 and post-process compression you need a Pro license. WHY denotes why you need the feature. I have discussed reserved capacity and advertised capacity in the next post.


Hope you find it useful.

Related Posts :-http://www.vzare.com/?p=4451

Nutanix Appliances, Node and Data point to select right one for your needs

In any architecture, everything starts with requirements. Gathering requirements and mapping them to a particular technology is one of the main tasks of an architect. In this post I'm going to talk about everything you need to map to Nutanix nodes and their features. Below is a very extensive Visio diagram of the various appliances (and nodes) available in the Nutanix offering. On the left-hand side is the business requirement: it could be SMB, intensive workload, business critical or mission critical. You might be aware there are similar offerings from Dell and, most recently, from Lenovo; for this post I'm sticking to the offerings made by Nutanix. Before I dive into how I built these swim lanes, I would like to provide the references which I have used.


[1] Online Plus course: http://www.nutanix.com/services/education/#schedule. There is a discount currently running for this course (promo code: APRIL16).

[2] Sizing Guide: http://designbrewz.com/

[3] Nutanix Specification Sheet: http://go.nutanix.com/rs/nutanix/images/Nutanix_Spec_Sheet.pdf

I have kept blue as the base color in this sheet. If you see a different color, it suggests there is another box of the same color; the same color denotes the same properties, e.g. CPU/network/memory. For example, the NX-8150 has 12- and 20-core processor options, which are also offered in the NX-8035-G4.

You should start from the left-hand side, where the business use case is mentioned. Let's take the NX9000, the simplest example, to understand the flow of this diagram.

Mission critical application, Consistent low latency

In technical terms this is the all-flash model, referred to as the NX9000 series. The NX9060-G4 offers two processor options (12 or 24 cores), 128 to 512 GB RAM, 6 SSDs, and 2 x 1 Gbps and 1 x 1 Gbps network cards. Up to the Network Cards column, everything listed is what you get in a single node. Nutanix requires a minimum of three nodes and recommends a minimum of four nodes for RF2 and N+1 (RF = Redundancy Factor).


So if you wish to know how many cores, how much memory, and how much storage the NX9000 series can give you, you now have this information handy. The NX9000 appliance (NX9060-G4) comes with 4 nodes. For 4 nodes you get either 48 or 96 cores, with 512 GB minimum to 2048 GB maximum RAM. Usable storage is calculated from the Sizing Guide [2] based on RF2. So you have the right information to decide whether you can host your workload on a single block or need more blocks.
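The per-block arithmetic can be sketched as a quick calculation. This is a minimal sketch using the per-node figures quoted in this post; verify them against the Nutanix specification sheet [3] before doing any real sizing.

```shell
# Per-node figures for the NX9060-G4 as quoted above (assumptions from this
# post, not from the spec sheet directly -- verify before sizing).
CORES_PER_NODE=24    # high-end CPU option; the low-end option is 12
RAM_MAX_GB=512       # maximum RAM option per node
NODES_PER_BLOCK=4    # the NX9060-G4 appliance ships 4 nodes per block

# Multiply per-node figures by the node count to get block totals.
BLOCK_CORES=$((CORES_PER_NODE * NODES_PER_BLOCK))   # 96 cores
BLOCK_RAM=$((RAM_MAX_GB * NODES_PER_BLOCK))         # 2048 GB
echo "Per block: $BLOCK_CORES cores, $BLOCK_RAM GB RAM"
```

Swap in the 12-core or lower-RAM options to reproduce the 48-core / 512 GB minimums mentioned above.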

I have also mentioned network and power consumption. Datacenter folks really need this to know how much power will be required and how many ports need to be patched. I also wanted to include the rack space required per block; in most cases it is 2U, but there are some exceptions, which I discuss below.

Business Critical Application (Exch, SQL, SAP, Oracle)

Let's take a slightly more complicated example: the NX8000. The most complicated is the NX3000 series, which I have yet to work on; that being said, the NX3000 is the most popular model. When I say complicated, it is not about the model itself but about the various options on offer and how difficult it becomes to choose without specific data. To make a decision you need data, and that is the main idea of this blog post: you now have the data to make a sound decision. Let's talk about it.


As seen above, the NX8000 has 3 models, and each model offers at least 3 processor types. It is worth mentioning that the NX8150 and NX8150-G4 are single-node appliances, so in this case you need 4 appliances per the Nutanix recommendation for RF2 and N+1. Since each appliance is 2U, 8U of rack space is needed, and you would need somewhere between 650-700 x 4 = 2600 to 2800 W per block. Note that I have quoted the maximum power from the Nutanix specification sheet [3], and that I have not included the available add-on network cards. Now let's narrow down: if you read from the left side and have to decide between the NX8150 and NX8150-G4, you have several data points. The first is the cores you need: the G4 offers anywhere between 16 and 36 cores and consumes 650 W. If your goal is to consolidate more and maximize the power-to-performance ratio, you can select the G4 right away. If, on the other hand, your application is expected to be highly network intensive, the NX8150 could be your best bet, since it has more network port options available out of the box (which also means more cables and more ports on the switch side). The deciding factors between the 8150 and the 8150-G4 are maximum RAM, maximum usable storage, and the network ports needed. And to repeat: both are 2U single-node appliances.
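The rack-space and power figures above reduce to simple multiplication. A minimal sketch, using the per-appliance numbers quoted from the spec sheet [3]:

```shell
# Four single-node NX8150 appliances (RF2 + N+1 recommendation), 2U each,
# 650-700 W maximum draw per appliance, as quoted in this post.
APPLIANCES=4
RU_EACH=2
W_MIN=650
W_MAX=700

echo "Rack space: $((APPLIANCES * RU_EACH))U"
echo "Power budget: $((APPLIANCES * W_MIN))-$((APPLIANCES * W_MAX)) W per block"
```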

Now let's discuss the third option, the NX8035-G4. It is two nodes per appliance, and power is just 950 W per appliance, so for 4 nodes (two appliances) that is 2 x 950 = 1900 W. Let's see how close you get to 36 cores: a maximum 28-core option is available, so you can choose the NX8035-G4 with the 28-core option for a great price-to-performance ratio. Are you losing anything? Yes: if your storage requirement is between 13 and 43 TB you are good; for anything above that you will have to fall back to the NX8150 model. This is just my way of looking at and extracting value out of the Visio I've created; I hope it helps. In case you haven't noticed, we are now talking in TBs, no longer GBs. I have attached a two-page PDF; look at any row in "Total Usable Storage per Block" and it is in TB. Don't miss that the 8035 offers 4 different processors.
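A quick sanity check of the NX8035-G4 power math (note that two appliances at 950 W each come to 1900 W). This sketch rounds the appliance count up in case an odd number of nodes is requested:

```shell
# NX8035-G4: two nodes per appliance, ~950 W maximum per appliance
# (figure as quoted from the spec sheet [3] in this post).
NODES_NEEDED=4
NODES_PER_APPLIANCE=2
W_PER_APPLIANCE=950

# Integer ceiling division: round up to whole appliances.
APPLIANCES=$(( (NODES_NEEDED + NODES_PER_APPLIANCE - 1) / NODES_PER_APPLIANCE ))
echo "$APPLIANCES appliances, $((APPLIANCES * W_PER_APPLIANCE)) W for $NODES_NEEDED nodes"
```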


ProTip: If you work for a service provider, you will instantly understand the value of this. In the service provider world, knowing these details enables us to deliver the most value to the customer in an efficient and optimal way.

Download PDF Copy

[Nutanix Cluster] Configure email-based alerts

Before I explain the features mentioned in the subject of this post, I would like to show you the HTML report. The report below could not be simpler, do you agree? It is a snapshot of a Nutanix cluster, and another reason to create a single cluster, as discussed in the previous post: more clusters mean more reports. The most fascinating part is that the report is an out-of-the-box (OOB) experience. It details the cluster name, block ID, and cluster version. The table below that details the alert severity, where it occurred (e.g. node/block ID), and describes in a very lucid manner what the issue is and its possible cause. This is a cluster-specific health check, titled Email Daily Digest:



How to configure?

The screen below shows how to configure email alerts. If you check "Email Daily Digest" you will get the above report. Enter the email IDs you wish to send it to. By default Nutanix support is added to the list. Isn't that great? As long as that mailbox is monitored and port 80/8443 is open, alerts reach the Nutanix support portal. The Email Alert checkbox is something I will cover later. Hint: it is one of the best alert mechanisms I have seen so far.


Now the same email configuration is used by NCC (Nutanix Cluster Check). Aakash Jacob has written a great article on it here; I strongly recommend reading it. NCC, however, runs on every node in the cluster, so this report gets a little cluttered depending on the number of nodes in the cluster. Have a look at the screen capture from my Nutanix CE edition. It is cluttered, but it has loads of tips, tricks, and KB references on how to fix issues. Unfortunately the KB portal is only open to customers and partners; I hope that will change soon.


How to configure NCC to send alerts?

ncc --set_email_frequency=24

Reference: http://nutanix.blogspot.ae/2015/01/ncc-swiss-army-knife-of-nutanix.html

Just set the frequency as per your requirement. I have configured it to 24 hours below.


What time of the day do you get these reports?

Well, I have yet to find out, as my timezone was set to PDT. I have changed it to GST. Tricky: in the timezone database Dubai falls under Asia (to my surprise). I blame CentOS rather than Nutanix for that. So I learnt how to set the timezone on a cluster.


Did you know it is enabled by default? You don't need any further evidence to convince yourself of its value, so start using it. Do note this covers only the Nutanix cluster and not the hypervisor. NCC checks are very critical and must be enabled.

Please note that with hyper-converged you have to wear both the hypervisor administrator and storage administrator "thinking hats". You can quite easily mess up the data. More on that later.

Basic commands using ncli -101

In order to use ncli you must SSH in using nutanix/nutanix/4u. You can use any of the CVM IPs.

In my case I used the cluster IP ( and you can see below it is directing me to . After logging in, type ncli to see the prompt below.
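The ncli prompt is interactive, but ncli also accepts a full command inline, which is handy for scripting checks over SSH. A minimal sketch; the address is a placeholder (not from this post), the command is only printed rather than executed, and depending on your environment you may need the full path to ncli on the CVM:

```shell
# Placeholder address -- replace with one of your CVM or cluster IPs.
CVM="cvm.example.local"

# One-shot check over SSH using ncli's inline command form.
# Printed here instead of executed so the sketch runs anywhere.
CMD="ssh nutanix@$CVM 'ncli cluster get-redundancy-state'"
echo "$CMD"
```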


How many nodes in the cluster?

nodetool ring -h localhost


How to check cluster redundancy?

cluster get-redundancy-state


How to check whether the metadata store is enabled?

ncli host list | grep "Metadata store enabled on the node"


How to shut down a CVM?

Log in to the CVM and issue cvm_shutdown 1


List the containers

container ls


List virtual machines

virtualmachine list


Find cluster-specific information. I have highlighted the important ones.

cluster info


Scale out storage and/or compute in Nutanix -101

Nutanix allows you to scale out compute and storage independently. As I was reading about Nutanix clusters, I wondered whether you really need to create a storage pool for each vSphere cluster. In pre-vSphere 6.0 days, the vSphere cluster was considered the vMotion boundary. Post vSphere 6.0, things changed, especially with cross-switch migration, cross-vCenter migration, and cross-cluster migration even when storage is not shared. From a design perspective, and to keep things simple, I would still recommend treating a cluster as the vMotion boundary.

Nutanix does not have specific guidelines on this, as it is no longer a constraint from the storage side. The Nutanix DSF (Distributed Storage Fabric) is a combination of nodes, hypervisor, and storage tiers (SSD+HDD), and it can scale to any limit; it is the vSphere limit of 64 nodes that bounds the scalability.

Let's compare the vSphere cluster and the Nutanix cluster. As we have defined the vSphere cluster as the vMotion boundary, in similar terms the Nutanix cluster is your storage boundary. If you create Nutanix Cluster-A, the VMs provisioned on it remain within its storage pool; they may be migrated using Storage vMotion. Do we have to create multiple Nutanix clusters? In my opinion you don't have to. Let's discuss that below.

Below, a single Nutanix cluster and a single vSphere cluster are created: a one-to-one mapping between the Nutanix cluster and the vSphere cluster. We have to create containers out of the storage pool. A container is a logical representation of the storage pool; containers are the objects presented to vSphere as NFS mounts. Containers are much like LUNs in that you choose which ESXi hosts should see the NFS mounts. A good vSphere design recommends at least two datastores: one for ISOs and one for VMs. You can create as many datastores as you wish, as long as the storage pool allows it. Irrespective of the number of datastores you create, there is a single storage pool below the containers, and performance is not going to change, as there is no RAID involved in DSF. So it doesn't make sense to create many datastores; keep things simple where possible.


In case you wish to scale out only storage, you simply add a storage node. Both Nutanix and Dell have storage-only nodes (Nutanix model: NX-6035c; Dell model: XC730xd-12c).

As you can see below, a one-to-one relation is not needed; you scale storage independently of compute. A very good blog post from Josh is available on this subject.



At some point in your vSphere design process there may arise a need to create multiple clusters. By vSphere definition, a cluster is the vMotion boundary. So a question that puzzled me for a while: how do we ensure storage is presented only to specific nodes when DSF spans all the nodes? The answer was not difficult at all. What remains simple is the storage pool: you create a single storage pool and present containers to specific hosts.

Below, a single storage pool and three containers are created. One container is presented to all ESXi hosts in the cluster for storing ISOs.

I will pause here a little. I remember how difficult it was to get storage admins to present one LUN to 10 ESXi hosts, forget 100. They had to create different zones over and above the existing storage zones, and it was too complicated for them to pool all the WWNs into a single zone. It was never accepted, as it was never considered good practice to present a single LUN to multiple servers. I had to make them understand this was not a problem for us, put it in writing in an email, and accept the risk. Loads of stress, loads of internal fighting and email exchanges. Here, I take the decision, I implement it, and of course I don't need anyone to vouch for or contest it. Nutanix made it damn simple.


The other containers, for VMs, you can mount on specific ESXi hosts to maintain the cluster boundary from a vSphere perspective.


  1. vSphere clusters and Nutanix clusters are completely independent and scale at their own level.
  2. A Nutanix cluster spans multiple nodes; that doesn't mean you have to design the vSphere cluster to match the Nutanix cluster.
  3. The Nutanix cluster is the storage boundary, and it has no hard limits.
  4. Nutanix cluster scaling keeps the vSphere design simple: adding a compute node increases compute plus storage, and your vSphere administrator can allocate storage to any node in vSphere.
  5. There is no downtime involved anywhere in scaling out a Nutanix cluster.
  6. Multiple Nutanix clusters, multiple storage pools, and multiple containers might increase operational overhead. Keep things simple, which is the central theme of the Nutanix platform.