
Network Automation – “The Last Mile”

With so many organisations looking to react faster to business change while continually doing more with ever-shrinking resources, automation is the only way to solve the challenge. Nutanix has pioneered the simplification of traditional datacentre infrastructure across compute, storage and virtualisation, but many customers I speak to ask about the network.

The Dynamic Duo

Network automation appears to be the “last mile” in their journey to a fully automated datacentre, and with the SDN marketplace rather fragmented it’s tough for organisations to pick a solution that completes the loop.

Many organisations are also embracing a DevOps methodology to improve the processes around developing and releasing new and existing applications, ultimately driving their innovation goals – and with that comes the requirement to provision infrastructure rapidly.

The public cloud has provided a great benchmark for what can be achieved through automation. Let’s face it: before AWS came along, how long did it take to deploy a virtual machine on a new network within a new datacentre? A long time. You’d spend a huge amount of time just ensuring you had compatible kit, let alone the process of deploying hypervisors, their supporting management infrastructure, provisioning and connecting storage environments and so on. With public cloud all of that is abstracted away, which enables businesses to move faster.

Nutanix aims to solve the rapid deployment challenge and ongoing scaling requirements whilst ensuring that “day 2” operations are also streamlined, just like in the public cloud where the infrastructure building blocks are invisible. To aid in this journey Nutanix has partnered with Mellanox to automate and simplify common “day 2” network tasks and complete the loop.

Mellanox are a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure.

Mellanox switches are managed through NEO, which exposes a REST-based API enabling tasks such as VLAN provisioning and trunking on the ports used by the Nutanix nodes. Consumers of the Nutanix Enterprise Cloud Platform can forget about VLAN provisioning requests, as VLANs are automatically set up and migrated as VMs move within the Nutanix infrastructure, ensuring applications always have access to the networks they need to communicate. Developers and operations teams can concentrate on delivering real business value and get on with developing the next business-defining application!
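To make that concrete, here’s a minimal sketch of what driving a VLAN change through a management REST API like NEO’s could look like. The endpoint path, payload shape and credentials are illustrative assumptions on my part, not the documented NEO API, so treat it as pseudocode with working syntax and check the Mellanox NEO REST reference for the real calls.

```python
# Hedged sketch: the URL scheme and payload below are assumptions for
# illustration, NOT the documented Mellanox NEO API.
import requests

NEO_URL = "https://neo.example.local/api"   # hypothetical NEO endpoint
session = requests.Session()
session.auth = ("admin", "admin")           # placeholder credentials
session.verify = False                      # lab only; use proper certs in production

def trunk_vlan(switch_ip: str, port: str, vlan_id: int) -> None:
    """Ask the management API to create a VLAN and trunk it on the
    switch port a Nutanix node is cabled to."""
    payload = {
        "action": "add_vlan",
        "params": {"vlan": vlan_id, "interface": port, "mode": "trunk"},
    }
    resp = session.post(f"{NEO_URL}/switches/{switch_ip}/tasks", json=payload)
    resp.raise_for_status()
    print(f"VLAN {vlan_id} trunked on {switch_ip} {port}")

# e.g. a VM on VLAN 42 has just migrated to the node hanging off Eth1/3
trunk_vlan("10.0.0.10", "Eth1/3", 42)
```

The point of the integration, of course, is that nobody runs a script like this by hand: Prism raises the event when a VM moves and NEO reconfigures the switch port automatically.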

Here are a couple of videos walking through the integration. In the first example a VM is migrated from Node A to Node B; because we automate the configuration of the VLAN on the Mellanox switches, VLANs are only configured as required – in real time – rather than trunking all existing VLANs on all ports.

In the second example we create a new VM within the Nutanix Prism console. Just like the previous example, the combination of Prism and NEO takes care of the VLAN provisioning task, ensuring that the consumer of the Enterprise Cloud Platform can get on with doing just that – consuming it, just like in the public cloud.

If you would like to know more about Nutanix and how we deliver an Enterprise Cloud Platform, check out our website: https://www.nutanix.com/what-we-do/

If you would like to find out more about Mellanox and their intelligent interconnect solutions, take a look at their website: http://www.mellanox.com/page/company_overview

Thanks for reading

Stuart

Sock stuffing


For a while now the metric most infrastructures, including Nutanix, have been benchmarked against is IOps – effectively the speed at which the storage layer can take a write or read request from an application or VM and reply. Dating back to the (re)birth of SANs, when they began running virtual machines and T1 applications, this has been the standard for filling out the shit-vs-excellent spreadsheet that dictates where to spend all your money.

Recently, thanks to some education and a bit of online pressure from peers in the industry, synthetic testing with tools like IOmeter has generally been displaced in favour of real-world testing platforms and methodologies. Even smarter tools such as Jetstress don’t give real-world results because they focus on storage and not the entire solution. Recording and replaying operations to generate genuine load and behaviour is far better. Seeing the impact on the application and platform means our plucky hero admin can produce a recommendation based on fact rather than fantasy.

Synthetic testing is basically like stuffing a pair of socks down your pants; it gets a lot of attention from superficial types but it’s only a precursor to disappointment later down the line when things get serious.

In this entry I want to drop into your conscious mind the idea that very soon performance stats will be irrelevant to everyone in the infrastructure business. Everyone. You, me, them, him, her – all of us will look like foolish dinosaurs if we sell our solutions based on thousands of IOps, bandwidth capacity or low-latency figures.

“My God, tell me more,” I hear (one of) you (mumble with a shrug). Well, consider what’s happened in storage hardware in the last five-ish years. We’ve gone from caring about how fast disks spin, to what the caching tier runs on, to tiering hot data in SSD, and now the wonders of all-flash. All in five or so years. Spot a trend? A bit of Moore’s Law happening? You bet, and it’s only going to get quicker, bigger and cheaper. Up next, new storage technologies like NVMe and Intel’s 3D XPoint will move the raw performance game on even further, well beyond what 99% of VMs will need. Nutanix’s resident performance secret agent Michael Webster (NPX007) wrote a wonderful blog about the performance impacts this new hardware will have on networking, so I’d encourage you to read it. The grammar is infinitely better for starters.

So when we get to a point, sooner than you think, when a single node can rip through >100,000 IOps with existing generations of Intel CPUs and RAM, where does that leave us when evaluating platforms? Not with synthetic statistics, that’s for sure.

Oooow IO!

By taking away the uncertainty of application performance almost overnight, we can start to reframe the entire conversation around a handful of areas:

  • Simplicity
  • Scalability
  • Predictability
  • Insightfulness
  • Openness
  • Delight

Over the next few weeks (maybe longer, as I’m on annual leave soon) I’m going to try to tackle each of these in turn, because for me the way systems are evaluated is changing, and it will only benefit the consumer and the end customer when the industry players take note.

Without outlandish numbers those vendors who prefer their Speedos with extra padding will quickly be exposed.

See you for part 1 in a while.

A dip into Prism

A few weeks ago I was given the lovely task of attending a meeting at the last minute, with no preparation time and a 3-hour drive, just after I got back from annual leave. The meeting was only an hour, so I decided to record a short 10-minute video in the morning to take them through what they’d actually be doing on a Nutanix cluster from day to day. Knowing the type of customer, I knew there would be no internet connection, let alone a 4G signal.

I could have just given a normal PowerPoint pitch and sent them back to sleep on a beach (which is where I still wanted to be), but I wanted to keep them awake and also elevate the conversation away from dull stuff like hardware and storage. Usability, simplicity and time to value were the intention here, so click below and leave a comment if it made sense to you. No voice-over as I’m too cheap to buy a program for my Mac that’ll do it 🙂

The consumable infrastructure (that’s idiot proof…)

Just give the customer what they need!

Over the last couple of months I’ve had my first experiences with Acropolis in the field. Both were quite different, but they highlighted two important design goals in the product: simplicity of management and machine migration.

Before I begin I want to take you back a few months to talk about Acropolis itself.  If you know all about that you can do two things:

  1. Skip this section and move on
  2. Go to YouTube and watch some classic Sesame Street and carry on reading with a warm glow only childhood muppets can bring.

I knew you couldn’t resist a bit of Grover but now you’re back I’ll continue.

Over the summer Acropolis gained a lot of happy customers, both new and old. In fact some huge customers had already been using it since January thanks to a cunning soft release, and that continues with our Community Edition too.

The main purpose of Acropolis was to remove the complexity and unnecessary management that modern hypervisors have accumulated, and to let customers take a step back and simply ask “what am I trying to achieve?”

It’s an interesting question, and one that is often posed when too deeply lost down the rabbit hole. For someone like me who used to spend far too long looking at problems with a proverbial microscope, there’s a blossoming clarity in the way we approached these six words. The journey inside Nutanix to Acropolis was achieved by asking our own question:

“For hypervisors, if you had to start again, what would you do better and what would you address first?”

Our goal was to make deploying an entire virtual environment, regardless of your background and skill set, intuitive and consumable. Our underlying goal for everything we do is simplicity, and while we achieved this with storage many years ago (what we call our ‘distributed storage fabric’), the hypervisor was the next logical area to improve.

Developing our own management layer and building it on top of our own hypervisor was a logical step, and that’s what brought us to where we are today with the Acropolis Hypervisor. You can see a great walkthrough of setting up VMs and virtual networks in this video.

 

Anyway, on to my first customer story.

Back in summer I spent time working with a manufacturing company on their first virtualisation project. They were an entirely physical setup using some reasonably modern servers and storage, but for many reasons they’d put off moving to a virtual platform for years. One of the most glaring reasons was one I hear a lot here, as well as in my previous role at Citrix: “it worked just fine yesterday, so why change?” While this is true, I could still be walking two miles to the local river to beat my clothes against rocks to clean them. But I choose to throw them in a basket and (probably by magic) they get cleaned. If my girlfriend is reading this, it could be my last blog…

Part of the resistance is related to human apathy, but their main concern was having to learn new skills, which takes focus and resources away from their business, and it simply being too time-consuming. I completely agreed. They wanted simplicity. They needed Acropolis.

Now, I could have done what many would and given a presentation, a demo and a finishing Q&A, but I chose to handle our meeting slightly differently. To allay their fears I let them work out how to create a network and a new VM themselves. As we went, I took them through the concepts of what a vCPU is and how it relates to what they wanted to achieve for the business. If someone with no virtualisation experience can use Acropolis without any training, there can’t be any better sign-off on its simplicity. We were in somewhat of a competitive situation as well, where ‘the others’ were pushing vCenter for all the management. The comparison between the two was quite clear, and while I’ll freely admit that feature for feature vSphere has many more strings to its bow, that wasn’t what the customer needed and isn’t the approach we are taking with the development of Acropolis. We had no wish to just make a better horse and cart, and the customer was extremely grateful for that.

One happy customer done, one to go…

Our second customer, dear reader (because there is only one of you), was already a virtualisation veteran and had been using ESXi for a few years before deciding to renew their rather old hardware and hopefully do something different with their infrastructure. Their existing partner, who’d been implementing traditional three-tier platforms prior to this, chose to put Nutanix in front of them to see if we could ease their burden on management overhead, performance and operating expenditure.

While the simplicity of Acropolis was a great win for them and made up most of their decision, it was how we migrated their ESXi VMs onto Acropolis that really struck me most, and that’s what I’m going to summarise now.

This was my first V2V migration, so I needed something simple as much as the customer and partner did – and wow, did we deliver. Here is everything we needed to do to migrate:

  1. Setup the Nutanix cluster and first container
  2. Whitelist the vSphere hosts in Prism
  3. Mount the Nutanix container on the existing vSphere hosts
  4. Copy the VM to the Nutanix container
  5. Create a new VM in Prism, select Clone from NDFS, then pick the disk copied in step 4
  6. Start the VM and connect to the console
  7. Strip out the VMware tools
  8. Install the VirtIO drivers
  9. Repeat from step 4 until all VMs are migrated

Now of course doing a V2V also has a few extra considerations, such as ensuring any interdependent services are migrated as a group, but really that’s all you need to do.
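If you fancy scripting the plumbing, here’s a rough sketch of steps 2 and 3 wrapped in Python for repeatability. The ncli and esxcli invocations are written from memory and the host names and addresses are made up, so verify the exact flags against your NOS and ESXi documentation before running anything.

```python
# Hedged sketch of steps 2 and 3: whitelist the ESXi hosts on the Nutanix
# cluster, then mount the container as an NFS datastore. Flags are from
# memory - verify with `ncli help` and `esxcli storage nfs` first.
import subprocess

CVM_IP = "10.0.0.50"                 # any CVM in the Nutanix cluster (made up)
ESX_HOST = "esx01.example.local"     # existing vSphere host (made up)
CONTAINER = "migrations"             # the container created in step 1

def run(cmd: list) -> None:
    """Echo then execute a command, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 2: whitelist the vSphere hosts so they may mount Nutanix storage
run(["ssh", f"nutanix@{CVM_IP}", "ncli", "cluster", "add-to-nfs-whitelist",
     "ip-subnet-masks=10.0.0.0/255.255.255.0"])

# Step 3: mount the Nutanix container on the vSphere host as a datastore
run(["ssh", f"root@{ESX_HOST}", "esxcli", "storage", "nfs", "add",
     "-H", CVM_IP, "-s", f"/{CONTAINER}", "-v", CONTAINER])
```

From there it’s a straight file copy of the VM folder onto the new datastore, followed by the Prism steps above.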

The clever bit is the Image Service. This is a rather smart set of tools that converts disks – like the VMDK in this example – into the format used by Acropolis. There’s no requirement for any other steps or management to get a VM across, and the customer had their entire estate completed in an afternoon. To me, that’s pretty damn impressive.

I’m really pleased with what engineering have done in such a short period of time and to think where this can go is quite amazing.

 

And now we come to the point explaining why I said this stuff was “idiot proof.”  I can only describe what happened as an organic fault in the system also known as a cock-up on my part.  I hold my hands up and say I was a dumb-dumb.  As HR don’t read this, and to be honest it’s just you and I anyway, I should be ok.

While we were preparing the cluster for the VM migrations I decided to upgrade the Nutanix software to the latest version, and while this was progressing smoothly node by node I somehow managed to…erm…hmm…well…I sort of sent a Ctrl+Alt+Del to the IPMI console. Call it brain fade. This obviously rebooted the very host it was upgrading, right in the middle of the operation. After a lot of muttering and baritone swearing I waited for the node to come back up to see what mess I had created…

Here’s where engineering and all our architects need a huge pat on the back. All I had to do was restart genesis on the node and the upgrade continued. What makes this even more amazing is that while I was mashing the keyboard to self-destruction the partner was already migrating VMs – during my screw-up the migration was still in progress! If I’d done this to any other non-Nutanix system on the planet it would have been nothing short of catastrophic. However, in this case there was no disruption and no downtime, and if I hadn’t let off a few choice words at myself nobody would have known. That is frankly amazing to me and shows just how well we’ve designed our architecture.

So how can I summarise Acropolis? It (and Nutanix) isn’t just consumer-grade simple; it’s also idiot proof, and I for one am very grateful for it 🙂

Community Edition Codes!!!!


As promised, here’s your chance to get your hands on the Nutanix Community Edition.  Early access, no queuing, in-there-before-the-riff-raff goodness.

Just leave a comment below with your email address and I’ll send one out.

Please remember to use your business email address and details when signing up at http://www.nutanix.com/products/community-edition/register/ otherwise it will be rejected.

The first 10 people only and if you already have one please don’t be an arse and do it again because I will send in Steven Poitras to crush your soul.

 

*** ALL SOLD! ***

 

David

 

Nutanix Community Edition beta

Roll up, roll up, the Nutanix Community Edition is coming!


Up until now the only way to get your hands on our platform was either to be a potential customer with a proof-of-concept block in your datacentre, to use one of our hosted datacentres, or to bribe me with F1 tickets. Community Edition allows our software to be experienced on your own servers – be that in your home lab or your test environment at work.

This helps everyone in a lot of ways as end users, customers, partners and the community in general can now experience our software first hand without risk or cost.  Yes, no cost.  Community Edition is completely free.

Today Nutanix announced private availability for the beta. There are a couple of ways to get your hands on this: you can join the waiting list here, which will grant you access to download the installer, documentation, setup videos and the Nutanix Next community. Another way to get hold of Community Edition is to have a friend already in the beta who has some invitation codes, so make sure you treat your NTPs nicely.

I may also have some codes to hand out later on so keep an eye on my Twitter and Facebook pages in June.

Community Edition should work on most hardware out there but there are caveats in terms of server spec and quantity to be aware of.

Firstly, this isn’t like a normal Nutanix cluster: it can run on just a single server (we call these nodes), but it can also use up to four if you want to create a traditional multi-node cluster.

Here are the highlights for the HCL:

  • Nodes: 1, 3 or 4*
  • CPU: Intel only, minimum of 4 cores, Intel VT-x
  • Memory: 16GB minimum
  • Storage: RAID 0 (LSI HBAs) or AHCI storage subsystems.
  • Hot Tier (SSD): 1x SSD minimum, 200GB minimum
  • Cold Tier (HDD): 1x HDD minimum, 500GB minimum
  • Networking: Intel NICs

Community Edition uses Replication Factor 1 (i.e. no data protection) if you use a single node, but Replication Factor 2 is available if you have three or more. Replication Factor is how we protect each 4k block of data: with RF1 the data is there just once, but with RF2 there’s a copy of every 4k block on another node in the cluster to keep your data safe even if you lose a node – or just turn it off by mistake.
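If the RF concept is new to you, here’s a toy illustration (my own sketch, nothing to do with the actual NDFS placement logic) of what “a copy on another node” means in practice:

```python
# Toy model of Replication Factor: every block gets `rf` copies, each on a
# different node, so RF2 survives the loss of any single node. This is an
# illustration only, not how NDFS actually places data.
def place_block(block_id: int, nodes: list, rf: int) -> list:
    """Choose `rf` distinct nodes for a block's copies (simple round-robin)."""
    assert 1 <= rf <= len(nodes), "RF must be between 1 and the node count"
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

nodes = ["node-a", "node-b", "node-c"]
for block in range(4):
    print(f"block {block}: copies on {place_block(block, nodes, rf=2)}")
# With rf=1 each block exists exactly once - lose that node, lose the block.
```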

The software itself is closely based on NOS 4.1.3, although Metro Availability, Synchronous Replication, Cloud Connect and Prism Central are not part of Community Edition. We do require Pulse (our call-home functionality) to be turned on as well, so we can see how the software is being used by you wonderful people. Upgrades will of course be completed live and without interruption, just as with our commercial software.

Installation for Community Edition is via a bootable USB image that runs the KVM hypervisor and then drops the Nutanix Controller VM into the hot tier on each node. After that the hypervisor runs from the USB stick and all VMs and heavy operations are run from the local storage. From here you can use Prism to create and manage your VMs, networks and so on. Oh, you didn’t know we already built a KVM control plane into our software? Yep, it’s been there for a while 🙂 Creating new VMs, networks, VLANs and DHCP scopes, plus cloning, migration and console access, are all there out of the box. More information on how to get set up will accompany the downloads, but it’s very simple, as you’d expect from Nutanix.

Support for Community Edition is via the Next Community forums so I highly recommend you sign up now if you haven’t already and I encourage you to participate and share your views, thoughts and experiences here and on the Next forums once it’s released.

 

*EDIT – Ooops!  Noob error.  I previously wrote we would support 1, 2, 3 & 4 nodes.  We aren’t supporting 2 node clusters so it’s only 1, 3 or 4 nodes for CE.  Cheers!

Quality assured, not assumed.

[Image: a Hackintosh built on a Dell Mini 10v]

Wow, bet that runs just like Steve intended!

There are two trains of thought in the world of hyperconvergence. One is to own the platform and provide an appliance model with a variety of choices for the customer based on varying levels of compute and storage. Each box goes through thousands of hours of testing, both aligned to and independent of the software it powers. All components are beaten to a pulp in various scenarios, run to death, and performance is calibrated and improved at every step. Apple has done this from its inception and has developed a vastly more reliable and innovative platform than any PC since. Yes, I’m a fanboy…

The other train is one that can (and has) been quickly derailed.

You create a nice bit of software, one that you also spend thousands of hours building and testing, but when it comes to the platform you allow all manner of hardware as its base. Processor, memory, manufacturer – all are just names at this stage. vSAN started its HCL last year as a massive Excel spreadsheet filled with a huge variety of tin, much of which was guesswork, and it showed in how that spreadsheet was received by the community. Atlantis USX also uses a similar approach. A choice of a thousand flavours is great if you’re buying yogurt, but not so good when your business relies on consistency and predictability – oh, and a fast support mechanism from your vendor. You can imagine the finger-pointing when something goes wrong…

It’s the software that matters, of course, and while this statement is correct it’s only a half-truth.

Unless you can accurately test and assure every possible server platform from every manufacturer your customers use, the supportability of the platform (that’s the hardware plus the software) is flawed. If you can somehow do the majority, you’re still in for a world of pain. Controllers on the servers may differ. Some SSDs may deliver different performance in YOUR software regardless of their claimed speeds. Suddenly the same software performs differently across hardware that is apparently the same.


At Nutanix we’ve provided cutting-edge hardware from small-footprint nodes to all-flash, but never once have we not known the performance and reliability of our platform before it leaves the door and is powered up by a customer. You can read about all six hardware platforms here. When we OEM’d our software to Dell we gave the same level of QA to the XC appliances too.

We know our hardware platform and ensure that it works with the hypervisors we support. We then know our software works with those hypervisors. We own and assure each step to provide 100% compatibility. If you’re just the software on top, you have thousands of possible permutations to assure. Sorry, I mean assume.

We own it all from top to bottom, and the boxes, regardless of their origin or components, are 100% Nutanix. This is how we can take and resolve support questions and innovate within the platform without external interference. Customers love the simplicity of the product, as you probably know, but there is an elegance in also offering a structured yet flexible hardware platform. Ownership is everything.

I’ve lost count of the flak I’ve taken for “not being software only” as that’s “the only way to be truly software defined.”

What bollocks.

It is the software that matters, but if as a company you cannot fully understand the impact your software has on the hardware it must run on, then the only person you’re kidding is yourself, and more worryingly the first person it hurts is your customer.

Let’s see who else follows the leader once again.

“Simplicity is the ultimate sophistication”

…so said Leonardo da Vinci. Why make things hard? Why make your own life harder when, with some effort, everything can be simplified and made better? This is our approach, and it touches Nutanix employees as well as our customers.

The one thing that still staggers customers about Nutanix is how flipping easy it is to get blocks installed into their environment. To give you an idea of how I do it, and the time it takes to get a completely blank (or even previously configured) proof-of-concept box installed, take a look at this:

  1. Image the servers. We and our partners use a tool called Foundation, which pushes a vanilla image of ESX, Hyper-V or KVM down to the nodes with the Nutanix software configured. This takes about 50 minutes because we do it over a 1GbE switch, and it’s automated.
  2. Configure the new cluster. Here we just add in IP addresses for the hosts, management ports and Nutanix CVMs. This takes 2 minutes via an intuitive web page, or you can do it via the NCLI if you want to appear a true geek (there’s a rough sketch after this list).
  3. Create the storage pool.  Another 14 seconds.
  4. Create the first container, set its policies and present it to the hosts. This final stage took me 22 seconds.
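For the geek route mentioned in step 2, steps 3 and 4 look roughly like this from the command line. The ncli flags below are written from memory as an illustration, so check `ncli help` on your cluster before trusting them:

```python
# Hedged sketch of steps 3 and 4 via NCLI, driven from Python for
# repeatability. Flag names are from memory - verify with `ncli help`.
import subprocess

CVM_IP = "10.0.0.50"   # any CVM in the new cluster (made-up address)

def ncli(*args: str) -> None:
    """Run an ncli command on a CVM over SSH, failing loudly on error."""
    cmd = ["ssh", f"nutanix@{CVM_IP}", "ncli", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 3: create the storage pool from all unused disks (~14 seconds)
ncli("storagepool", "create", "name=sp01", "add-all-free-disks=true")

# Step 4: create the first container on that pool (~22 seconds)
ncli("container", "create", "name=ctr01", "sp-name=sp01")
```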

That’s it.

Total time for me is under an hour. Total time for the customer is under 10 minutes (if you include racking time!)

Now we’re at the stage where the customer can start building VMs (aka doing the things that matter) while the infrastructure becomes invisible – just as it should be.

At no point do they need to revisit the storage other than to create new containers or change policies.

Go order something from VCE, NetApp, EMC or any SAN vendor and try doing the same thing. In fact, try to do the same thing with any hyperconverged competitor as well.

This is the power of simplicity and it’s only going to get easier for our customers.

There’s a storm coming. Big deal…

Remember the ominous end scene in The Terminator (yes, there’s a ‘the’ in it) where Sarah Connor shows how badly prepared she is, travelling to Mexico without even a simple Spanish phrase book? “There’s a storm coming,” says the little boy. Sarah, looking more than a little nervous, drives off into the future to meet the rain clouds head on. Smart move?


To summarise, she was badly prepared for where she was going and what was to come. I didn’t even see a raincoat in the Jeep she was driving, and don’t get me started on the blatant lack of a roof. “There’s a storm coming and you’re wearing the wrong clothes, your car will rust from the floor panel out, and frankly you should have learned basic conversational Spanish before you left,” is what the boy should have said. His dad probably warned him away from people like that soon after.

Anyway, storms are a ballache at the best of times, and generally the ones you and I know about in our working lives are boot storms. Just like the type that’ll soak you to the skin and prove that a fancy bandana is no replacement for an umbrella, we need to adopt the right technology to overcome this inevitable problem, and that’s what I’m going to address today.

This morning I was with a customer who’s looking to start a VDI deployment with 500 desktops and grow to around 3,000 depending on take-up in the business. The major headache they’d read about was IOps, and in particular the rather nasty side effect that booting all their VMs at once has on the system as a whole. SANs are not very good at serving IOps. They’re not that great at anything other than storage, really, and that’s why there are lots of bandage technologies out there to cover up the holes and disguise how awful the performance can be if you try to run VMs from them. Now, I’m all for keeping massive investments going, so if you want to throw some further expense in front of a SAN you’re locked into for four more years, go right ahead. Come and talk to me when the steak-dinner invites and massive renewals come in.

Thankfully today the customer was ready for real change which is why I was discussing Nutanix’s approach to VDI and all the other wonderful challenges desktop virtualisation brings with it.

Boot storms to us at Nutanix are nothing more than a light shower with a raincoat on.  Preparation to mitigate the IO spikes for any amount of desktops is built in to our product and removes the worry for the customer.  Let me explain…

In a typical compute+SAN architecture you have a bunch of servers running VMs. They talk down through a bunch of storage fabric to a couple of storage controller heads and then down to the disk shelves. The shelves can only serve up a finite amount of IO. If you boot 10 machines, that’ll be fine. 100 could probably work OK too. Go to 200 and above and you’re looking at major stress being put on everything below those servers. The fabric might not be saturated, but you can bet your last weatherproof North Face coat that the disk shelves or controllers will be. The more VMs that boot, the slower the whole system becomes as each VM has to wait for IO to be served. Now of course you could stagger boot times and do them all at 4am before people come into work, but what about the time when you need to fire them all up immediately after invoking DR, or when applying a critical patch during the working day? Big trouble, Sarah. Big trouble.

Because Nutanix is a distributed platform, our approach to boot storms is to tackle them per node. If we assume 80 to 120 VDI VMs can run on a single node, that’s the only thing we need to calculate for. Once we know how many VMs each node can handle in terms of boot and general density (I’ve had IOmeter tests show 25,000 random reads and 18,000 random writes on regular 3000-series nodes a couple of weeks back), all we have to do is add more of the same node type to reach the desired total VMs. That’s how we scale and design clusters. It’s that easy.
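The arithmetic really is that boring. Here’s a back-of-envelope sketch for the customer above; the 100-VMs-per-node density is an assumed midpoint of the 80–120 range, not a sizing guarantee:

```python
# Scale-out sizing sketch: desktops divided by per-node density, rounded up,
# plus one spare node for failover. The density figure is an assumption.
import math

def nodes_needed(total_desktops: int, vms_per_node: int, spares: int = 1) -> int:
    """How many identical nodes a VDI estate needs, N+spares style."""
    return math.ceil(total_desktops / vms_per_node) + spares

print(nodes_needed(500, 100))    # the initial 500-desktop deployment -> 6
print(nodes_needed(3000, 100))   # the eventual 3,000-desktop target -> 31
```

Growing from 500 to 3,000 desktops is then just adding more nodes of the same type, with no re-architecting of a central array.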

Here’s a diagram from The Nutanix Bible on how Shadow Clones work – click through to the article for the full detail.

[Diagram: Shadow Clones]

Because we read and write locally, or in some cases read over the 10GbE switch, the majority of all IO is done local to the VMs and via the SSD tier. Even better, if we see blocks of data required by lots of VMs on a node, we kick in Shadow Clones to ensure all those VMs get the data localised right away. Data locality is the key here, but it’s only one of the technologies we use to make sure the cluster as a whole is predictable and efficient. The best part is that all of this is done on the fly without any administration. We take care of it invisibly and without disruption.
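If the Shadow Clone idea still feels abstract, here’s a toy model of the behaviour: once several VMs on a node are reading the same base disk, a local copy gets cached so subsequent reads stop crossing the network. The threshold and mechanics below are made up for illustration; the real heuristics live in NDFS.

```python
# Toy model of the Shadow Clone behaviour (illustrative only, not NDFS code):
# when enough VMs on one node read the same base disk, cache it locally.
from collections import defaultdict

READER_THRESHOLD = 2   # made-up trigger point, not the real heuristic

class Node:
    def __init__(self, name: str):
        self.name = name
        self.local_cache = set()           # base disks shadow-cloned here
        self.readers = defaultdict(set)    # base disk -> set of reading VMs

    def read(self, vm: str, base_disk: str) -> str:
        self.readers[base_disk].add(vm)
        if base_disk in self.local_cache:
            return f"{vm}: local SSD read of {base_disk}"
        if len(self.readers[base_disk]) >= READER_THRESHOLD:
            self.local_cache.add(base_disk)    # the "shadow clone" kicks in
            return f"{vm}: remote read; {base_disk} now cached on {self.name}"
        return f"{vm}: remote read of {base_disk}"

node = Node("node-1")
for vm in ["vdi-01", "vdi-02", "vdi-03"]:
    print(node.read(vm, "gold-image.vmdk"))
```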

So next time you start to worry about storms, just select the right clothing before you go playing in the rain and you’ll be just fine.

 

Gracias por leer.

Loser

You may well be a loser. If you look after a SAN for virtualisation you’re a loser.

Now, you’re going to take that the wrong way, I know that. My point is you’re losing time. Time away from doing more productive things like shopping on Amazon, getting a high score on Forza or making a cup of tea. You may even be better suited to getting some real work done.

Anyway, to show you just how flipping easy Nutanix makes creating storage for your hypervisor, I recorded a very simple video. In it you’ll see how to provision a container, which to ESX is a datastore. I’ll also show you how that presents itself to ESX, how to unmount it from a particular host, and how to add some policies around dedupe and compression.

If you can use an iPhone you can use Nutanix.
