60 blog posts in 2014 – sometimes I don’t understand how that happens. Is that a lot? A little?
I have always said that I do not blog for the sake of blogging, but to share information and my thoughts. It is good to see that people find this useful – and do take an interest in what I have to say.
The 5 posts that received the highest number of visitors in the past year were:
- The Quickest Way to Get Started with Docker
- VSAN - The Unspoken Truth
- vCenter is Still a Single Point of Failure
- Nova-Docker on Juno
- Introducing VIRL Personal Edition
What did I blog about? I calculated this with the tags I attached to each of the posts:
- OpenStack (30)
- VMware (19)
- Cloud (12)
- DevOps (7)
- VMworld (6)
- Administration, Architecture, Design and Docker (5)
(Ok I lied – that was more than 5 topics)
If you would like a more graphical presentation – here you go.
For me it is interesting to see that my focus has changed, and is no longer VMware-centric. I guess that is to be expected – given my current role, and also because I have a more overall solution in mind during my daily work – which is so much more than virtualization.
I hit a milestone – on December 03, 2014 I surpassed 2,000,000 pageviews on my blog.
It took me 5 years to reach my first million; the second was achieved in only two years. I have been blogging for 7 years now, and it is always good to share my thoughts, my insights, and sometimes my rants. I hope you all benefit, and will continue to make use of the articles I write.
Here’s looking forward to a great 2015, filled with opportunities, community and exciting things ahead.
And how did 2014 turn out for you? Please feel free to leave your comments and thoughts below.
Catch you all on the flip side!
I would like to make clear a few things from the start.
- This is not a VMware bashing post (even if it might be perceived as such).
- I hold all three of the authors in very high regard.
When reading this document I was hoping to hear something new, something refreshing – something addressing what VMware customers have been asking for, and vocally complaining about, for a very long time.
Alas – this is not the case.
vCenter is a single point of failure. There, I have said it. I have said it before, and I will continue to say it until this is fixed.
In the following article I will be taking statements directly from the text, providing my thoughts as I go along.
Great start – This document will discuss the requirements… After re-reading that statement I understood what VMware did. VMware has not provided us with a method of making vCenter highly available – rather, they have explained what they think High Availability should mean for your vCenter server.
The authors then go into explaining all about MTBF and MTTR – and they did a great job. I will not go into the details here – you should read the document.
SLAs are extremely important – and for each and every environment an SLA means something different. Yours may differ from your neighbor’s, so it is important to understand what you need to achieve.
They then go into describing the tests that were run in order to measure the amount of time it would take for a vCenter server to recover. Fair enough.
Here is where it starts to get interesting. Let us look at this in a picture.
Bottom line: once a vCenter server has gone down, it will take a little over 5 minutes until it is fully functional again.
This part of the document states that having vSphere HA – and having vCenter running as a virtual machine actually provides some level of protection.
A dedicated management cluster is of course advised – that way you have a dedicated environment to run your management components without having to worry that the client workloads will interfere.
Also putting the database in the same management cluster is recommended – seems logical.
I then noticed that the only SQL version that is supported for vCenter 5.5 is Enterprise and up – which was news to me. I gather this is a documentation bug – because the VMware Product Interoperability Matrix says that Standard is supported.
It would really be great if they would explain exactly how that would be possible and how it should be done. Is it still possible? How exactly? In order to protect vCenter, will I need another vCenter? What about licensing? What are the implications?
Emergency Restore was a new one to me –
but it is only available in vSphere Data Protection Advanced Edition – something that was left out – which lists at approximately $1,500 per socket. As a result of the feedback received in the comments, I have amended this: it seems that Emergency Restore is actually available in all editions of VDP – not only Advanced (more information here).
OK, enough copy and paste. This piece above is what set me off.
Essentially, what VMware are saying is the following:
- Use a separate management cluster
- Run vCenter in a VM
- Run the Database in a VM
- No matter what happens – if your vCenter crashes then it will be down for 5 minutes.
- Your workloads are safe, because the ESXi hosts they are running on are protected by HA, and the VMs can continue running without a vCenter server.
Points 1-4 - I totally agree. With point #5 I also agree.
But there are environments that cannot afford a 5 minute outage. VMware might say that having vCenter go down for five minutes is not really an outage per se, but I would very much like to disagree here.
If I cannot provision a new VM because my vCenter is not available – that is an outage.
Where would this be an issue?
- VDI environments – what if a user logs in and their desktop is not provisioned because vCenter is down? How about a whole 100 or 1,000 employees?
- Highly automated environments – ones that use products like vRealize Automation or vRealize Code Stream. Imagine having your code builds fail for 5 minutes because vCenter is not available – the whole continuous delivery process breaks down.
I might be exaggerating a bit – but I have voiced this more than once, starting more than 4 years ago with Troubleshooting Tools for vCenter.
vCenter is probably the most crucial part of your virtual infrastructure. And all that you can expect from an availability perspective is to accept as a given that vCenter might go down for 5 minutes at a time.
There are environments that will accept this - I would actually say that the large majority are fine with this – but what about those who are not? Those who cannot afford having this kind of outage? What do they do?
There used to be a product called vCenter Server Heartbeat – which was retired.
Where are those promised options? When will they be available? What do companies do in the interim? Pray that their vCenter does not crash?
Embedded below is the Twitter conversation that sparked this post.
The scenario on which VMware based their whole presumption was that the host on which vCenter was running would crash, HA would kick in, and the VM would be restarted on another host within 5 minutes.
The whole scenario of having a problem with your database, or a vCenter service problem (and believe me, it happens), was not covered.
Take the following scenario. You have a vCenter appliance. For some reason the vCenter service stops responding on the VM. There is no automatic restart. Eventually you get a call, something is not right. You try and restart the service, nothing happens. You restart the VM, nothing happens.
Now what? Open a call with VMware? Deploy another vCenter appliance and hope that nothing goes wrong? I can guarantee you that will take a hell of a lot longer than 5 minutes.
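This is exactly the gap a service-level watchdog would cover. As a hedged sketch only – the process name below is an assumption for illustration, not something VMware ships:

```shell
#!/bin/sh
# Hypothetical sketch of the kind of watchdog that is missing here:
# check whether the vCenter service process is alive, and flag it if not.
# The process name "vmware-vpxd" is an assumption for illustration only.
SERVICE="vmware-vpxd"
if pgrep -f "$SERVICE" > /dev/null 2>&1; then
    echo "$SERVICE is running"
else
    echo "$SERVICE is not running - this is where a restart or an alert would go"
fi
```

Even something this crude would have detected the hung service before the phone rang – which is the point: today, nothing in the product does.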
Why does the document even go into providing a clustered solution for the MSSQL database? Because that might fail? Yes, it could happen. But guess what – the whole system is only as strong as its weakest link. So providing a clustered database solution might give some peace of mind – but it will not protect you from an outage, because there is no way to cluster a vCenter server.
In conclusion – yes, there are considerations. I would definitely not say that VMware have a High Availability solution for vCenter. They have done their best to minimize the impact when vCenter crashes – but that is not HA!
What do you think? Am I making a mountain out of a molehill? Or is this a real and valid concern? Please feel free to leave your comments and thoughts below.
OpenStack is a living product – and because it is community driven - changes are being proposed almost constantly.
So how do you keep up with all of these proposed changes? And even more so why would you?
The answer to the second question is that if you are interested in the projects, then you should be following what is going on. In addition, there could be cases where a proposed blueprint could break something that you currently use, or is in direct contradiction to what you are trying to do – and you should leave your feedback.
OpenStack wants you to leave your feedback – so please do!
About the first question - the answer is here – http://specs.openstack.org. This is an aggregate of the new blueprints (specs) for each of the projects as they are approved.
I use RSS feeds available for the blueprints which helps me keep up to date as soon as a new blueprint is added.
I have compiled an OPML file with all the current projects that you can add to your favorite RSS reader.
You can download it in the link below.
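For the curious, an OPML file is just a small XML document listing feeds. A minimal hand-rolled sketch might look like this – the feed URLs here follow the specs.openstack.org pattern but are examples, so check them against the actual site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>OpenStack Specs Feeds</title>
  </head>
  <body>
    <!-- One outline element per project; xmlUrl values are illustrative -->
    <outline type="rss" text="Nova Specs"
             xmlUrl="http://specs.openstack.org/openstack/nova-specs/rss" />
    <outline type="rss" text="Neutron Specs"
             xmlUrl="http://specs.openstack.org/openstack/neutron-specs/rss" />
  </body>
</opml>
```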
I hope this will be as useful to you as it is to me.
As always, comments, suggestions and thoughts are always welcome.
In this post we will go through the steps needed to actually contribute code. This will not be a detailed tutorial on how to use git and gerrit, and its functionality, but rather a simple step by step tutorial on how to get your code submitted for review in OpenStack.
First we start up the container.
Since playing around with real OpenStack code is not a good idea when you are just learning – there is a sandbox repository where you can perform all your tests.
First things first we need to clone the repository so that we have a local copy of the files
git clone https://github.com/openstack-dev/sandbox
This copies all the files in the repository to a folder of the same name under your current working directory. Depending on the size of the repo, this could take seconds or minutes.
Enter the directory and look at the files.
You will see the files are the same as those in the repository on the web – with the exception of the .git folder, which is not visible on the GitHub repository. This link will give you some more explanation as to what is in that folder.
Now make sure you have the latest code from Github.
git checkout master
git pull origin master
Create a branch to do your work in, from which you will make your commits.
git checkout -b MYFIRST-CONTRIBUTION
Now we get to the changes.
I am going to create a folder named maish with two files inside, like the structure shown below.
Here I just created empty files – but it could be correcting someone else’s code or adding new code, the process is the same.
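If you want to rehearse this branch-and-commit flow without touching the sandbox repository at all, you can do it in a throwaway local repository – all the names and paths below are invented for practice:

```shell
# Rehearse the workflow in a scratch repository (no network needed).
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email "you@example.com"
git config user.name "Your Name"

# An initial commit so the branch has a starting point
echo "hello" > README
git add README
git commit -q -m "Initial commit"

# The branch / add / commit steps from the post
git checkout -q -b MYFIRST-CONTRIBUTION
mkdir maish
touch maish/file1 maish/file2
git add .
git commit -q -m "Add sample files"
git log --oneline
```

Once you are comfortable with the mechanics locally, the sandbox steps above are exactly the same – just against the cloned repository.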
Once you have completed your work you will need to add all the changes and push them back up to the original branch.
Add all the files and changes by running
git add .
Next, commit your changes with a detailed message (and you should really understand how to write a proper commit message) – this creates a change set that will be displayed on review.openstack.org.
git commit -a
A vi editor will open where you can now add the reasons for your change and mention any closed bugs. Follow the conventions for git commit messages, giving a good patch description: a summary line first, followed by an empty line, descriptive text, backport lines and bug information:
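For illustration, a commit message following those conventions might look like this (the summary line, description and bug number are all invented):

```
Add sample files to demonstrate the review workflow

This change adds a maish/ directory with two empty files, as an
exercise in submitting a change to the sandbox repository.

Closes-Bug: #1234567
```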
Save the file by typing :wq, and you will see that your files and changes were added.
Set up the Gerrit Change-Id hook, which is used for reviews, and run git review to run a script in the /tools directory which sets up the remote repository correctly:
You might be prompted to accept the SSH key – type yes.
If all goes well, you will see something similar to the output below.
Looking back at the GitHub repo – you will not see any changes. You might ask yourself – where did my code go?
The reason you do not see any change – is that before any code is accepted in the master branch it has to be reviewed, both by an automated set of tests and also by humans.
So where did it go?
If you go to https://review.openstack.org/#/ you will see the change you just submitted
Clicking on the change will take you to the details where you can see the following:
The change information.
The commit message (you will notice that the Change-Id was automatically added)
The status of the reviews and feedback. This could be an automatic test or an actual person who reviewed and left a comment.
Here are the files that were checked in.
And the comments themselves
I can also make an additional change – this could be based upon feedback from one of the reviewers, a failed test, or any other reason. Here I added another file – file3.
I need to add the changed files and commit them again – this time with the --amend flag. You can change the commit message.
git add .
git commit -a --amend
And then push upstream.
Going back to the web page you will see a few differences.
The new commit message.
And that the code is now added as a new patch set – i.e. a new version of the code.
One last thing.
Since this is a sandbox – please keep it clean. That means when you are finished with your tests you should mark your commit as Abandoned.
The status will change.
And this will change the status in your list of changes to Closed.
I hope this was useful and will alleviate some of the concerns people have with contributing code back into OpenStack.
Please feel free to leave your comments and feedback below.
This is an internal Cisco tool which is so useful – that I am really pleased that it is finally available for public consumption.
VIRL stands for Virtual Internet Routing Lab.
What Is VIRL?
VIRL is a comprehensive network design and simulation platform. VIRL includes a powerful graphical user interface for network design and simulation control, a configuration engine that can build complete Cisco configurations at the push of a button, and Cisco virtual machines running the same network operating systems as used in Cisco’s physical routers and switches – all running on top of OpenStack.
How Does VIRL Work?
VIRL uses the Linux KVM hypervisor and OpenStack as its virtual machine control layer, with a powerful API enabling the creation and operation of VMs in a simulated network topology. Users design their network using the VM Maestro design and control interface, with network elements such as virtual routers, switches and servers. The design is translated into a set of virtual machines running real Cisco network operating systems.
What Does VIRL Offer?
Design, learn and test with virtual machines running real Cisco network operating systems – IOS, IOS XE, IOS XR and NX-OS – as well as virtual machines running 3rd-party operating systems. Build highly accurate models of real-world or future networks, study the behaviour and configuration of routing protocols, break and fix your network, and learn how to troubleshoot with a powerful integrated platform.
The original information can be found here
The Cisco VIRL Personal Edition annual subscription license provides students with a scalable, extensible network design and simulation environment for several Cisco Network Operating Systems. This includes IOSv, IOS XRv, NX-OSv and CSR1000v, as well as third-party images such as Ubuntu Linux.
Educational pricing is available for college students, parents buying for a college student, and teachers, homeschool teachers and staff of all grade levels – limited to one purchase.
VIRL enables users to:
• Build highly-accurate models of real-world or future networks.
• Learn and test with ‘real’ versions of Cisco network operating systems – IOSv, IOS XRv, NX-OSv and CSR1000v.
• Integrate virtual network simulations with real network environments.
The download includes VIRL Personal Edition 1.0 Pre-Release software with a single-user annual license to manage up to 15 Cisco nodes.
You can view a short demo of the product in the link below.
One of the most daunting and complicated things people find when trying to provide feedback and suggestions to the OpenStack community, projects and code – is the nuts and bolts of actually getting this done.
Scott Lowe also posted a good tutorial on Setting up the Tools for Contributing to OpenStack Documentation. But the process itself is still clunky, complicated and for someone who has never used git or gerrit before – highly intimidating.
That is why I embarked on providing a really simple way of starting to contribute to the OpenStack code. I was planning on writing a step-by-step guide on how exactly this should be done – but Scott’s post was more than enough, so there is no need to repeat what has already been said.
Despite that, there are still some missing pieces, which I would like to fill in in this post.
Before we get started, there are a few requirements/bits of information that you must have, and some things that you need to do beforehand, in order for this process to work.
They are as follows:
- A launchpad account
- An OpenStack Foundation account (use the same email address for both step 1 and step 2).
- A signed Contributor License Agreement (CLA).
- A gerrit http password.
- Somewhere to run Docker (I wrote a post about this - The Quickest Way to Get Started with Docker)
Let me walk you through each of the steps.
1. A Launchpad Account
Sign up for a Launchpad account – http://www.launchpad.net
Register for a new account – you will need to provide some information of course
You will of course need to verify your email address. Go to your inbox, click the link in the email you received, and validate your address.
2. Join the OpenStack Foundation
Sign up for an OpenStack Foundation account – https://www.openstack.org/join
And fill in the details
*Remember* – use the same email address you used to sign up for the launchpad account.
3. Sign the CLA
Go to https://review.openstack.org/ and sign in with your Launchpad ID (from Step 1)
If you have not logged out of the Launchpad – you should be presented with a screen like the one below.
Some of the information will already be populated for you. You will need to choose a unique username.
We will not choose an SSH key at the moment. Scroll to the bottom of the screen and choose New Contributor Agreement.
You should choose the ICLA
Review the agreement and understand what you are signing and then fill in the details below.
If everything is Kosher then you will be presented with the following screen to confirm
4. A gerrit http password
Remember the username you chose in the previous step? This is the one you should use.
On that same Settings screen, choose HTTP Password, enter your username, and click Generate Password.
Don’t worry – the password has already been changed – the minute I published this post.
And we have finished all the registration and administrative things.
Just to recap – you will need these details for later (replace them with your own relevant details):
- Your Name – Maish Saidel-Keesing
- Email Address – maishsk@XXXX.com
- Gerrit Username – maish_sk
- HTTP Password - zwZW0X5NAGVP
Running the Container
Now that we have all the parts – it is really simple to get started.
The steps are as follows:
docker pull maishsk/openstack-git-env
This will retrieve the container from the Docker Hub. Once the container has been retrieved you can launch the container.
A few points to note beforehand.
- The container will always start a bash shell. The aim of this environment is to allow you to contribute to the OpenStack Project – so it has to be interactive.
- You have to provide 4 variables to the run command – it has to be all four – otherwise the container will not launch.
- The container will automatically upload an SSH key to gerrit – to allow you to connect and contribute your code upstream. It does not remove the SSH keys when done – this you will have to do manually.
The command to launch the container is as follows – and remember, you need to use the values from above.
docker run --name="git-container" \
  -e GIT_USERNAME="\"Maish Saidel-Keesing\"" \
  -e GIT_EMAIL="maishsk@XXXX.com" \
  -e GERRIT_USERNAME=maish_sk \
  -e GERRIT_HTTP_PASSWORD=zwZW0X5NAGVP \
  -i -t maishsk/openstack-git-env
A few words about the variables
--name="git-container" – this is just to identify the launched container easily
-e GIT_USERNAME="\"Maish Saidel-Keesing\"" – the quotes have to be escaped \"
-e GIT_EMAIL=maishsk@XXXX.com – Don’t forget to put in your real email address!
Once the container is launched – provided you have followed all the steps and the variables are correct – you will see the newly created SSH key printed to the screen, and you will also be able to see that key in the Gerrit web interface.
You can see that the comment on the web is the same as the hostname of the container.
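Once inside the container, a quick sanity check that all four variables made it through might look like this (the variable names match the run command above; this loop is just an illustration, not part of the container itself):

```shell
# Inside the container: report which of the four variables are set.
for v in GIT_USERNAME GIT_EMAIL GERRIT_USERNAME GERRIT_HTTP_PASSWORD; do
    eval "val=\$$v"
    if [ -n "$val" ]; then
        echo "ok:      $v"
    else
        echo "missing: $v"
    fi
done
```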
Embedded below is a screencast of the launching of the container.
In the next post – I will show you how to actually contribute some code.
If you have any feedback, comments or questions, please feel free to leave them below.