Author Archive: Eric

Men and Mice API Client Gem for Ruby

Part of what I am doing these days is integrating Red Hat CloudForms / ManageIQ with the Men and Mice IP address management (IPAM) application. In order to reduce gem dependencies, reduce duplication, and handle some architecture issues I have been working on a client gem that uses the JSON-RPC API provided by Men and Mice.

If you are using Ruby and want to integrate with M&M please check out the gem and provide any feedback you have through Github issues.

The gem:
The source:
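As a rough illustration of what talking to a JSON-RPC API looks like from Ruby (the endpoint URL, path, and method name below are placeholders for illustration, not the gem's actual interface):

```ruby
require "json"
require "net/http"
require "uri"

# Build a JSON-RPC 2.0 request body. The method names used with it are
# placeholders here, not the real Men and Mice API surface.
def jsonrpc_request(method, params, id: 1)
  { "jsonrpc" => "2.0", "method" => method, "params" => params, "id" => id }.to_json
end

# Hypothetical usage: POST the payload to an IPAM web service endpoint.
def call_ipam(url, method, params)
  uri = URI(url)
  http = Net::HTTP.new(uri.host, uri.port)
  req = Net::HTTP::Post.new(uri.path, "Content-Type" => "application/json")
  req.body = jsonrpc_request(method, params)
  JSON.parse(http.request(req).body)
end
```

The gem wraps this kind of plumbing so calling code only deals with methods and return values.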

Helper utility for ManageIQ/CloudForms Automate

As I do Automate development on Red Hat CloudForms I see a lot of copy/paste code, which is the opposite of DRY. Our process was to put commonly used code in a starting template for an Automate method. This is nice because someone new to development gains a good starting point, but it results in a lot of duplicated code, and we lose the benefit of bug fixes to the helper methods.

To try and reduce the amount of copy/paste coding going on I created a Ruby gem that contains commonly used code and can be distributed via the standard gem installation mechanism. Its current focus is on logging and a few miscellaneous helpers.
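To give a sense of the pattern, here is a minimal sketch of a reusable logging helper; the module and method names are illustrative, not the gem's actual API:

```ruby
# Illustrative only, not the gem's real API. Inside Automate you would pass
# $evm so messages land in automation.log; outside Automate it falls back
# to puts so the helper can be exercised stand-alone.
module AutomateHelper
  def self.log_with_prefix(level, message, evm = nil)
    line = "[#{level.to_s.upcase}] #{message}"
    if evm
      evm.log(level, message) # CloudForms service object logging
    else
      puts line               # stand-alone fallback for testing
    end
    line
  end
end
```

Shipping helpers like this as a gem means a bug fix lands everywhere with a simple gem update instead of a hunt through every copied method.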

More details can be found in the README on the GitHub repo:

Feel free to try it out and submit bug reports or pull requests.

Getting ManageIQ instance attributes without executing methods/states

I want to use the values from the ProvisionRequestQuotaVerification instance. The problem is that using $evm.instantiate causes the instance's code to run, which results in an exception or error.

Digging around I found $evm.instance_get:

my_instance = $evm.instance_get("/ManageIQ/Infrastructure/VM/Provisioning/StateMachines/ProvisionRequestQuotaVerification/default")

This works as long as you know the full path including the domain, but then you cannot rely on domain inheritance.

Here is my current workaround:

# Sending a fake message does not invoke the state because it is
# tied to the create message.
fake_message = "doesnotreallywork"
empty_instance = $evm.instantiate("/Infrastructure/VM/Provisioning/StateMachines/ProvisionRequestQuotaVerification/Default##{fake_message}")
# Unfortunately so are the attributes. However, now we know the real
# name of the resolved instance and can use that to get our attributes.
instance_name = empty_instance.name
my_instance = $evm.instance_get(instance_name)
$evm.log(:info, "The instance contains: #{my_instance.inspect}")

Red Hat CloudForms/ManageIQ – Examples

For me, one of the easier ways to begin to learn something is to learn by example. Even if the example does not solve my exact problem, I can use it to figure out ways of extracting the information I need and patterns for implementing new functionality. In response to a forum question regarding provisioning approvals, Kevin Morey pointed out a Github repository that I was unaware of. This repo is a great source of example code to get you started exploring more advanced usage of CloudForms.

I would suggest that reading the docs and playing with the code enough to feel comfortable with it are critically important if you plan to use CloudForms or ManageIQ, but this repo is a great reference for one way of solving the problem.

Foreman ESXi Installation Never Completes

In our environment we want to use Foreman to build our OCP hardware. It was working well except that the installation would repeatedly PXE boot and reinstall because Foreman did not know that the PXE portion was complete.

The standard notification code below was failing for some reason.

%post --interpreter=busybox --ignorefailure=true 
wget -O /dev/null <%= foreman_url %>

After running the unattended install in a Fusion VM so I could have console access, I reviewed the install log and found a misleading message from wget about the URL being bad. After a little digging I determined that the problem was a lack of DNS resolution in the ESXi installer environment.

Per the documentation this is expected.
Deploying ESXi 5.x using the Scripted Install feature (2004582)

Note: When ESXi 5.x is being installed using a kickstart file via PXE with DHCP enabled; the DNS settings are not available and are not saved in the configuration file.

Following what I have seen in previous installation scripts I implemented a quick fix to the Foreman OS template to add DNS settings to the installer environment so the call works and the build proceeds as expected.

%post --interpreter=busybox --ignorefailure=true 
# Add temporary DNS resolution so the foreman call works
echo "nameserver <%= @host.subnet.dns_primary %>" >> /etc/resolv.conf
echo "nameserver <%= @host.subnet.dns_secondary %>" >> /etc/resolv.conf
wget -O /dev/null <%= foreman_url %>
echo "Done with Foreman call"

OpenZIS installation guide

I have spent a day or so trying to get the OpenZIS project up and running on a CentOS machine. A lack of updated instructions and an older code base made this process harder than I expected.

While I am still investigating if OpenZIS will meet my needs, I wanted to publish what I did as a reference to others who might try to install it. The doc is in my forked version of OpenZIS. Once I have tested things more, I will likely submit a pull request to the author.

Problems with intuitive availability calculations

I often see/hear comments from people whose common sense approach to availability design includes eliminating any single point of failure. This is a great goal when required, but care must be taken that the “common sense” approach actually achieves what is required. I have found that using a fault tree approach to evaluating design decisions can be insightful.

The latest instance of this was a book that, while otherwise great, gave the advice of separating the database from the application onto separate servers because the single server was a single point of failure. This advice works when the components are independent, but not when they are dependent on each other.

Here is an example analysis of an application on hardware that is 99% available.

If we follow the advice of separating concerns to eliminate the single point of failure we end up with this picture.

Unfortunately, based on the hardware availability, we have reduced the availability of the service. With a diagram it becomes apparent that the entire service is now at risk of failing when either of the two servers fails. This incorrect thought process also leads to the idea that consolidating an application onto fewer virtualized servers always leads to lower availability.

All of this is not to say that consolidation always provides higher availability.  Here is an example of using a software stack that allows for multiple servers to serve the same purpose.  This analysis is not exact as it leaves out the increased likelihood of software failure or human error, but in the simple case of hardware availability you can see a definite improvement.
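The arithmetic behind these diagrams can be sketched in a few lines of Ruby, assuming each server is independently 99% available:

```ruby
# Components in series must all be up, so availabilities multiply.
def series_availability(*parts)
  parts.reduce(:*)
end

# Redundant (parallel) components fail only when all of them fail.
def parallel_availability(*parts)
  1 - parts.map { |a| 1 - a }.reduce(:*)
end

# One server running everything:                         0.99
# App and DB split across two dependent servers:         0.99 * 0.99  = 0.9801
# Two redundant servers, each running the full stack:    1 - 0.01**2  = 0.9999
```

Splitting dependent components across two servers lowers availability from 99% to about 98%, while true redundancy raises it to 99.99%.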

PowerCLI Move-Datastore Function

Moving datastores into folders via drag/drop can be painful. In some cases vCenter does not want to allow a drag from a long list. Here is a quick function to make moving via PowerCLI a little bit easier.

Function Move-Datastore {
    param (
            $datastore = $(throw "Datastore(s) must be provided."),
            $folder = $(throw "A destination folder must be provided.")
    )
    if ($folder.Type -ne "Datastore") {
        throw ("The specified folder is not a datastore folder.")
    }
    # Collect the managed object references for the datastore(s)
    $dsList = @()
    foreach ($ds in $datastore) {
        $dsList += $ds.ExtensionData.MoRef
    }
    # Move all of the datastores into the folder with a single API call
    $folder.ExtensionData.MoveIntoFolder($dsList)
}

Convenience Functions for Connecting to Multiple vCenters

In a large environment repeatedly connecting and disconnecting from groups of vCenters can be tedious so I created some helper functions and put them in my PowerShell profile.

Usage looks like this:

Connect-VIServerGroup lab
Connect-VIServerGroup desktops
Disconnect-VIServerGroup lab

Function Connect-VIServerGroup {
	param (
		[String]$group = $(throw "A VI server group name must be specified.")
	)
	Connect-VIServer -Server (Get-VIServerList $group)
}

Function Disconnect-VIServerGroup {
	param (
		[String]$group = $(throw "A VI server group name must be specified.")
	)
	Disconnect-VIServer -Server (Get-VIServerList $group) -Confirm:$false
}

Function Get-VIServerList {
	param (
		[String]$group = $(throw "A VI server group name must be specified.")
	)
	switch ($group) {
		"lab" {"vclab01.local", "vclab02.local"}
		"servers" {"vc001.local", "vc002.local", "vc003.local", "vc004.local"}
		"desktops" {"vc005.local", "vc006.local", "vc007.local", "vc008.local"}
		default {throw "No VI server group named $group found."}
	}
}