Netscaler Upgrades - GOTCHA

So tonight, I will finally be upgrading from NetScaler 11.0 to 12.0. Long story as to why we haven't. But anyway, one of the biggest gotchas when upgrading NetScalers is licensing. If you reboot a NetScaler and it does not have the proper licensing, you will get the "oops, you can't do anything" experience when it comes back up. I absolutely love this guy's blog because he lists how you can check your licensing before you make that epic mistake.

https://www.techdrabble.com/citrix/netscaler/23-check-netscaler-license-expiration-information-quickly-via-powershell
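
If you'd rather script that check yourself, here is a minimal sketch using the NITRO REST API from PowerShell. It only dumps what the appliance reports as licensed (the linked post goes further and parses the actual license files for expiration dates), and the NSIP, credentials, and certificate handling are all placeholders you'd adapt to your environment.

    # Sketch: ask the NetScaler NITRO API what it thinks is licensed before you reboot.
    # Assumes HTTPS management access and a trusted certificate (on PowerShell 7+ you
    # can add -SkipCertificateCheck if the appliance still has its self-signed cert).
    $nsip = "10.0.0.10"          # hypothetical NSIP
    $cred = Get-Credential       # nsroot or an equivalent account

    $headers = @{
        "X-NITRO-USER" = $cred.UserName
        "X-NITRO-PASS" = $cred.GetNetworkCredential().Password
    }

    $resp = Invoke-RestMethod -Uri "https://$nsip/nitro/v1/config/nslicense" -Headers $headers -Method Get

    # Dump every licensing property so you can spot anything that reports as unlicensed
    $resp.nslicense | Format-List *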

I'll post more tonight after I've done the upgrade.
[5 hours later]
So it was not as bad of an upgrade as I would have thought. I will post some screenshots on Monday. But a few observations: the VPX on 11.0 actually did some weird things using the GUI upgrade method. I transferred the firmware onto the appliance within the nsinstall folder, but inside the GUI, it tried to append additional folder paths that didn't exist. This did not occur during the upgrade on the MPX.

Another observation: opening the GUI right after the upgrade presents strange formatting. So remember to either clear your cache or close your browser windows after you do an upgrade. Otherwise you will think you completely messed something up when you log in and the formatting is all over the place.

Another gotcha: we use some customization within the VPN portal. We added a placeholder for the user name field, and this did not carry over; it gets overwritten during the upgrade. So if you've made any changes within the vpn/js folder, make sure you back them up first (a sketch for pulling a copy off the appliance follows).
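
If you want to script that backup instead of pulling the files down manually, here is a rough sketch that grabs one file off the appliance with the NITRO systemfile resource. The file name and path are only examples (point them at whatever you actually customized), and the API hands the content back Base64-encoded, so it has to be decoded before saving.

    # Sketch: back up a customized GUI file from the NetScaler before upgrading.
    # File name and location below are examples only; adjust to your customization.
    $nsip     = "10.0.0.10"
    $cred     = Get-Credential
    $fileName = "login.js"                       # hypothetical customized file
    $filePath = "/netscaler/ns_gui/vpn/js"       # adjust to where your changes live

    $headers = @{
        "X-NITRO-USER" = $cred.UserName
        "X-NITRO-PASS" = $cred.GetNetworkCredential().Password
    }

    $uri  = "https://$nsip/nitro/v1/config/systemfile" +
            "?args=filename:$fileName,filelocation:" + [uri]::EscapeDataString($filePath)
    $resp = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

    # NITRO returns the file content Base64-encoded
    $bytes = [Convert]::FromBase64String($resp.systemfile[0].filecontent)
    [IO.File]::WriteAllBytes("C:\Backups\$fileName", $bytes)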

Also, the SSL certificates are now organized differently. Don't have a complete panic when you upgrade and only a few of your certificates are listed: the new SSL > Certificates node only lists server certificates. To view your CA certificates, go to SSL > CA Certificates. I had a minor panic moment, but all is well.

One more note: if you ask the appliance to reboot automatically after a successful upgrade, you will not get to see the upgrade's success message. The screen will suddenly stop giving you feedback and appear stuck. Just refresh and you'll see that the appliance is down for its reboot.

Some other interesting things about this version: favorites! We all have sections we visit frequently inside the NetScaler GUI, and now you can favorite those sections and jump to them quickly. The interface is also quite sleek; if you've used NetScaler MAS 12.0, you'll be familiar with the layout. I also like the check-for-updates section. What I don't like is that clicking a node in HA opens the node for editing. Actually, clicking any item inside the interface opens the item instead of just selecting it. I'm used to clicking an item to select it and then clicking Add or Edit. I like to select an item and then click Add because it duplicates the item, letting me edit the copy to create a new one.

The "new" Adaptive TCP feature definitely seems promising. This was available in 11.1 starting with build 51.21, but was recommended that you at least be at 11.1 build 55.10.
Other requirements for Adaptive TCP (taken from Adaptive Transport Product Documentation):
  • XenApp and XenDesktop: Minimum version 7.16. 
  • VDA for Desktop OS: Minimum version 7.13.
  • VDA for Server OS: Minimum version 7.13.
  • StoreFront: Minimum version 3.9.
  • Citrix Receiver for Windows: Minimum version 4.7 (EDT and TCP in parallel require minimum version 4.10 and Session Reliability).
  • Citrix Receiver for Mac: Minimum version 12.5 (EDT and TCP in parallel require minimum version 12.8 and Session Reliability).
  • Citrix Receiver for iOS: Minimum version 7.2. 
  • Citrix Receiver for Linux: Minimum version 13.6 for Direct VDA Connections only and minimum version 13.7 for DTLS support using NetScaler Gateway (or DTLS for direct VDA connections).
  • Citrix Receiver for Android: Minimum version 3.12.3 for Direct VDA Connections only.
  • IPv4 VDAs only. IPv6 and mixed IPv6 and IPv4 configurations are not supported.
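
One NetScaler-side note, hedged since your build may differ: adaptive transport through NetScaler Gateway rides on DTLS, so the Gateway virtual server needs DTLS switched on (in addition to the HDX adaptive transport policy on the XenApp/XenDesktop side). Here is a minimal NITRO sketch of that toggle; the vserver name is made up, and the attribute should be verified against the NITRO docs for your firmware.

    # Sketch: enable DTLS on an existing NetScaler Gateway virtual server so EDT can
    # negotiate. "GW_VS_EXTERNAL" is a placeholder name; adjust NSIP and credentials.
    $nsip    = "10.0.0.10"
    $cred    = Get-Credential
    $headers = @{
        "X-NITRO-USER" = $cred.UserName
        "X-NITRO-PASS" = $cred.GetNetworkCredential().Password
        "Content-Type" = "application/json"
    }

    $body = @{ vpnvserver = @{ name = "GW_VS_EXTERNAL"; dtls = "ON" } } | ConvertTo-Json

    Invoke-RestMethod -Uri "https://$nsip/nitro/v1/config/vpnvserver" -Headers $headers -Method Put -Body $body
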
For all of the times that I've done these upgrades, I will say I enjoyed this one way more than the others. Hope this helps!!! Talk to y'all later.



Upgrading VCSA 6.5 to 6.5U1

Make sure you know the order for upgrading the vCenter Appliance.

If you are unfamiliar with the upgrade process, then VMware has been so kind to give us a chart.
https://kb.vmware.com/s/article/2147289

But the biggest gotcha to point out here: if you have any external PSC (Platform Services Controller) or SSO appliances, these MUST be upgraded first. Do not get burned here and mess up your entire environment.

The entire upgrade took about 45 minutes in an environment with 2 geographical data centers, 2 linked instances of VCSA, and 1 PSC.

I suggest using the appliance management web portal for the entire upgrade process [https://your_vcsa_or_psc:5480].

I did a full backup of the 2 VCSAs and the PSC, and I also took snapshots for good measure. The download of the updates took a few minutes. Once the updates are applied, you will notice you cannot manage anything through that VCSA until it is rebooted, and the VCSA will not reboot on its own.
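
For what it's worth, the snapshot part is easy to script with PowerCLI. This is just a sketch: the host and VM names are placeholders, it assumes all three appliances are visible from wherever you connect, and I'd connect to an ESXi host rather than the vCenter you're about to take down.

    # Sketch: snapshot the vCenter/PSC appliances before staging the 6.5U1 update.
    # Requires VMware PowerCLI; server and VM names are placeholders.
    Connect-VIServer -Server "esxi01.lab.local" -Credential (Get-Credential)

    foreach ($vmName in @("vcsa01", "vcsa02", "psc01")) {
        Get-VM -Name $vmName |
            New-Snapshot -Name "pre-6.5U1" -Description "Before VAMI update" -Memory:$false
    }

    Disconnect-VIServer -Confirm:$false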

Once the VCSA is rebooted, you may notice unknown health statuses when you log in. This is caused by your browser cache, so don't be alarmed. Just clear your cache and you should see all green.

All in all, this was not a bad upgrade at all. Just make sure you take a look at the upgrade sequence so you're not up all night trying to un-fubar your environment!!!

If you receive vSphere HA host status errors from any of the hosts after the upgrade, it may be necessary to disable HA at the cluster level and then re-enable it. You can follow several different forums that suggest removing affected VIBs or uninstalling the FDM agent, but ultimately, the fix for us was simply to disable and re-enable HA at the cluster level.
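
If you'd rather do that from PowerCLI than click through the web client, a minimal sketch (with a made-up vCenter and cluster name) looks like this:

    # Sketch: bounce vSphere HA on a cluster to clear stale host status errors.
    # Requires VMware PowerCLI; adjust the vCenter and cluster names.
    Connect-VIServer -Server "vcsa01.lab.local" -Credential (Get-Credential)

    $cluster = Get-Cluster -Name "Prod-Cluster"

    # Disable HA, give the FDM agents a minute to unconfigure, then re-enable it
    Set-Cluster -Cluster $cluster -HAEnabled:$false -Confirm:$false
    Start-Sleep -Seconds 60
    Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false

    Disconnect-VIServer -Confirm:$false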

If you are using other products like Site Recovery Manager or vSphere Replication, make sure you upgrade those as well, in the proper order. When upgrading the vSphere Replication appliances, make sure to shut them down and power them back on; this ensures that any configuration changes that occurred during the upgrade are propagated to the appliance. Once your vSphere Replication servers are upgraded, perform the upgrade on your SRM servers. The biggest thing to remember in the upgrade process is to not make any changes, especially to the objects registered within VCSA. If you make this mistake, you will have to remove the installations from VCSA. I once made the mistake of renaming my vSphere Replication appliance; it broke SRM completely, and I had to redo the installation to get it working again.

Well it's been real folks!!! Until next time!!!

Optimizing VMware vCenter Converter

Disclaimer: First off, I will not promise that doing any of these things will actually make it run better for you. I will just say that these things have performed well in all the environments where I've used them.

Run the tool locally
  • This should be an automatic, well duh!!! Installing the Converter locally onto the VM or physical machine will always perform faster than running it from a Converter server. However, this prevents you from using the tool to shut the source machine down after the conversion completes.
    • Pros
      • Faster than any other method unless importing it via OVF in which case why are you reading this article??
      • No additional credentials to pass other than the one to the VMware infrastructure
    • Cons
      • Additional network security needed. This requires that the VM or physical hardware have access to the vCenter server and hosts. If you have a large network or a bunch of firewall rules, then this could be a pain.
      • Requires you to install the full version of the software and add additional tweaks for performance before converting.
      • Requires babysitting because you cannot use the tool to shut the machine down after the final sync.
  • With a bunch of data to migrate (500GB+), this is definitely my favorite option.
Use a Converter Server
  • Using this method means that you don't have to worry about settings getting messed up. You simply set your favorite settings and then allow people to use the server to connect to all the VMs that need to be converted. This method creates a centralized management approach to all P2V and V2V projects.
    • Pros
      • Centralized management
      • Running logs of all previous conversions
      • One server to introduce to your firewall rules
    • Cons
      • Not as fast as the speeds you can see when the tool is run locally
      • Can sometimes fail more often due to the extra hops
      • Two credentials are needed, one for the server and one for the VMware environment
  • With 500GB or less, this is the best option for doing multiple migrations at once and having a centralized place to monitor the progress.
Performance Enhancements
  • Set Data connections per task to Maximum [This allows multiple drives to copy at once]
  • Turn off SSL for the Worker service (a scripted version of these steps is sketched after this list)
    • Go to C:\ProgramData\VMware\VMware vCenter Converter Standalone
    • Edit the converter-worker.xml
    • Find this line <useSsl>true</useSsl> and change it to false
    • Save the file and restart the worker service.
  • In another blog, I talked about things to do when converting from Xen to VMware. 
  • Make sure that all network connections are solid
  • If there's a 500GB disk and only 50GB of it has data, then consider using file copy instead of block copy. Only use file copy when a good majority of the disk is free space; otherwise you will hate yourself.
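
Since the SSL tweak is the one I repeat on every Converter install, here is a small sketch that automates it. It assumes the default ProgramData path and that the worker service's display name contains "Converter Standalone Worker"; check both on your install before running it.

    # Sketch: disable SSL for the Converter worker's data connections and restart
    # the worker service. Run elevated on the machine where Converter is installed.
    $xmlPath = "C:\ProgramData\VMware\VMware vCenter Converter Standalone\converter-worker.xml"

    [xml]$config = Get-Content -Path $xmlPath

    # Flip every <useSsl>true</useSsl> in the config to false
    foreach ($node in $config.SelectNodes("//useSsl")) {
        $node.InnerText = "false"
    }
    $config.Save($xmlPath)

    # Restart the worker so the change takes effect
    Get-Service -DisplayName "*Converter Standalone Worker*" | Restart-Service
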
Well that's all I have for now. Hopefully this helps someone save some time in their migrations. 

XenServer to VMware Migration

So I'm sure many of you have read all the tutorials and all the forums that say make sure to uninstall the tools before converting a VM from XenServer to VMware. Okay, so maybe they're right. If you want to cleanly move a Xen VM to VMware, then yes uninstall XenTools. However, if you want the converter to move a whole lot faster??????

DON'T UNINSTALL XENTOOLS!!!!

Ok, let's back up and explain why they say you should uninstall the tools. There's really only one primary reason for doing so... DRIVERS. By uninstalling the tools, you force the VM to install the normal Windows drivers for the hardware and OS it is sitting on. With the tools installed, you have all the Xen drivers, which help it do things like XenMotion or safely shut down/restart the OS.

So I'm sure you are still waiting for me to tell you why you should actually keep XenTools installed on the VM. Well, if you've ever used VMware Converter to convert a machine after uninstalling the tools, then you likely saw the migration dip to lightning speeds of 1KB/sec. On a VM with upwards of 100GB of data, you will be able to go get dinner, walk the dog, build 3 hosts, and possibly take a mini vacation before the VM is done. In the field, we saw a VM with about 500GB of data take a week to do the initial synchronization, and this was after it failed twice. The reason for the slowness is that the native drivers do not give the same speeds; in our environment, it was the difference between 100Mbps and 1Gbps.

Are there any downsides? Well, sort of. Initially, we would create a conversion job and set it not to run the final synchronization. This allowed us to uninstall the tools right before the final synchronization. However, this caused another problem later: the job would no longer start because it detected too many changes to the OS. So we tried not uninstalling the tools at all and running the conversion all the way through with XenTools installed. What happened? Nothing but greatness. We were able to convert a 500GB VM from Xen to VMware in a matter of a few hours.

But things to note:

  1. Do not tell the conversion wizard to install VMware Tools.
  2. Do not install VMware Tools before uninstalling XenTools.
  3. You MUST uninstall XenTools on the first boot of the VM, otherwise you will get a BSOD (not the end of the world)
  4. You MUST uninstall all of the Xen-related devices from Device Manager (a PowerShell sketch for this step follows the list)
    1. Open command prompt as administrator
    2. Type set devmgr_show_nonpresent_devices=1
    3. Type devmgmt.msc
    4. Show hidden devices within Device Manager
    5. Look for anything Citrix or Xen and remove.
  5. DO NOT REBOOT until you have removed all Xen items. If you do not, then you will get a BSOD (again not the end of the world)
  6. After you've removed all Xen items, then reboot and verify you don't get a BSOD (see what to do when you get a BSOD)
  7. Install VMware Tools
  8. Fix your network configuration
  9. Jump for Joy
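
For step 4, if the converted guest is new enough to have the PnP cmdlets and a pnputil build that supports /remove-device, here is a rough PowerShell take on the same cleanup. On older guests, stick with the devmgmt.msc steps above; the "Xen|Citrix" match is only an example, so review the list before deleting anything.

    # Sketch: list (and optionally remove) leftover Xen/Citrix devices after conversion.
    # Run elevated inside the converted VM; review the list before removing anything.
    $xenDevices = Get-PnpDevice | Where-Object { $_.FriendlyName -match "Xen|Citrix" }

    # Review first
    $xenDevices | Select-Object Status, Class, FriendlyName, InstanceId | Format-Table -AutoSize

    # Removal requires a pnputil build with the /remove-device switch
    foreach ($dev in $xenDevices) {
        pnputil.exe /remove-device $dev.InstanceId
    }
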
So you got a BSOD? Well, I did say that there was sort of a downside, and the BSOD is it. However, it is easily resolved. Simply boot into Safe Mode and do all the things I stated in #4 of the things to note. After you've done that, you can reboot and safely install VMware Tools. Fix your network configurations and yay, you are done.

And yes, this also works if you are running the conversion tool on the VM itself. Same process, and in fact it runs even faster than using a conversion server. Well, that's all!!! Hopefully this saves you time.

About the Creator


Hello, the name is Tiffanny Renrick aka Citrix Goddess. I'm a consultant with over 15 years of experience in IT. I've worked in many different industries such as manufacturing, health care, financial services, and government. This blog is dedicated to the many things that I've experienced on and off the field. I will post how-tos on configurations or just gotchas that I've experienced along the way. My primary focus will typically be Citrix, VMware, and Microsoft products. However, I am always looking for new and interesting things to tackle in the industry, so feel free to shoot me an email if you need feedback on something.


Most recent certifications



LTSR Woes

Don't stop upgrading... I have noticed over time that many organizations are picking LTSR and forgetting to upgrade. I have updated quit...