[kwlug-disc] Virtualization allocation?
unsolicited
unsolicited at swiz.ca
Sat Nov 6 16:54:26 EDT 2010
Paul Nijjar wrote, On 11/06/2010 4:18 PM:
> On the other hand, experiencing the joys of VM sprawl firsthand might
> be fun, too.
It gets old surprisingly, astonishingly, and depressingly fast. VM
sprawl is a pain in the arse: that many more machines to track, patch,
and monitor.
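For the tracking part, something like the following at least keeps the
sprawl visible in one place - a minimal sketch, assuming libvirt-managed
hosts and the libvirt python bindings (the host URIs are made up for
illustration):

    import libvirt

    # Hypothetical host URIs - substitute your own libvirt-managed boxes.
    HOSTS = ["qemu+ssh://host1/system", "qemu+ssh://host2/system"]

    def inventory():
        """Print every guest on every host, running or not,
        so sprawl is at least visible in one place."""
        for uri in HOSTS:
            conn = libvirt.open(uri)
            try:
                for dom in conn.listAllDomains():
                    state = "running" if dom.isActive() else "stopped"
                    print(f"{uri}: {dom.name()} ({state})")
            finally:
                conn.close()

    inventory()

It doesn't patch anything for you, but you can't patch what you can't
even enumerate.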
> It does not seem as if it is worth putting a lot of $$$
> into hardware for it (at least not on the Linux side).
It is always worth putting $ into redundant hardware (+ backups) -
although perhaps only from an admin-sanity point of view (eventually,
once you're past the additional maintenance layer it adds).
However, one has to have the $ in the first place.
And the optics of it, to management, aren't good: a couple million $
into real multi-site redundant hardware, weighed against what else the
money could be used for.
It really comes down to policy decisions as to how important live 24x7
access to your data is to your business, and, especially, getting RTO
(Recovery Time Objective) and RPO (Recovery Point Objective) targets
set. RTO and RPO are easy to explain, easy to get (policy) decisions
on, and lead almost directly to specification requirements that can be
signed off on.
However, once management sees the costs associated with effecting
those decisions, the objectives tend to get ratcheted back. (-:
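To illustrate how directly RPO/RTO targets turn into spec requirements
(the numbers here are invented, purely for illustration):

    # Toy targets: 1-hour RPO, 4-hour RTO, 500 GB of data to restore.
    rpo_hours = 1      # max tolerable data loss
    rto_hours = 4      # max tolerable downtime
    data_gb = 500

    # RPO bounds capture frequency: back up (or snapshot/replicate)
    # at least every rpo_hours.
    backup_interval_hours = rpo_hours

    # RTO bounds restore speed: the whole dataset has to come back
    # within rto_hours, which dictates minimum restore throughput.
    min_restore_mb_per_s = (data_gb * 1024) / (rto_hours * 3600)

    print(f"Back up at least every {backup_interval_hours} h")
    print(f"Need >= {min_restore_mb_per_s:.1f} MB/s restore throughput")

With those numbers you'd need roughly 36 MB/s of sustained restore
throughput - the kind of figure you can actually put in a purchase
spec and sign off on.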
One thing frequently forgotten in such plans is room modifications -
be it to accommodate more individual boxes (which don't tend to stack
high / make use of vertical space), meaning more square footage and
more electrical and network connections, or to intensify computing
'power' per square foot (racks), meaning added 220V lines, power
distribution units, etc.
Fortunately, such costs, even if you're knocking down walls, tend to
be one-offs. But you sure don't want to be doing it twice.
Since increasing room space is frequently not possible, and computing
requirements are only ever going to go up, you will be going to VMs
some day. Knowing that, even if it's not coming tomorrow, may help you
avoid snookering yourself later.
So, if the big, bad hardware comes in - VM it up front. In the
meantime, if you have separate working boxes, or are prototyping, keep
them as separate boxes - knowing that when they become business
critical, or production, they will move to a VM.
The problem is not VMs; it's accommodating requirements in the space
available, and those requirements include the extent of redundancy and
its supporting elements - be it power, UPS, or connectivity.
Really, the RPO/RTO drive the rest.
It is perhaps not wrong to say that RTO/RPO / redundancy / uptime
drive VMs. The big costs of the former drive maximizing the use of
what those costs purchased.
Alternatively, and perhaps more true in your case, today's dual-core
hardware running functions/servers that don't tax it feels ridiculous,
so you try to run a few VMs rather than pulling out Yet Another Box.
So instead of a massive VM / redundancy strategy, you have multiple
smaller 'VM clusters' to take advantage of the disparate / 'powerful'
boxes available to you. Just remember: LOTS of memory - and sometimes
it's easier to stuff 16GB of memory into one box than 4GB into each of
4 boxes.
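The consolidation math also works in your favour, since each physical
box pays its own host/hypervisor overhead (sizes invented):

    # Invented sizes: guests wanting 1 GB each, plus per-box overhead.
    guest_gb = 1.0
    host_overhead_gb = 1.0   # hypervisor + host OS on each physical box

    def guests_that_fit(ram_gb, n_boxes):
        """Guests that fit across n_boxes with ram_gb of memory each."""
        per_box = int((ram_gb - host_overhead_gb) // guest_gb)
        return n_boxes * per_box

    print(guests_that_fit(16, 1))  # one 16GB box    -> 15 guests
    print(guests_that_fit(4, 4))   # four 4GB boxes  -> 12 guests

Same total RAM either way, but splitting it across four boxes costs
you three more copies of the overhead.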
I suspect that in your case you need multiple VM strategies, quite
apart from bringing puppet into the mix, as you note: one strategy for
whatever redundant / production hardware you have, and another to make
best use of the individual powerful boxes you might have.
If, after a certain yardstick of hardware power is reached, you make a
policy of always virtualizing the hardware enterprise-style (you don't
install an OS; you install a low-level VM-managing layer - a
bare-metal hypervisor - upon which you install OSes), perhaps life
gets a little simpler / less complex moving forward. [I'm thinking
VMware ESX here - can't think of the name of the VirtualBox
equivalent.]