[kwlug-disc] Virtualization allocation?
unsolicited
unsolicited at swiz.ca
Sat Nov 6 15:20:13 EDT 2010
Paul Nijjar wrote, On 11/06/2010 12:58 AM:
> I am (foolishly) starting to think about virtualization.
>
> Right now I have a non-virtualized Linux server that serves up several
> roles:
> - Trouble ticket system
> - Nagios alerts
> - Cacti graphs
> - Certificate generation
> - Internal mail server
> - Lightweight FTP repository
> - ...
>
>
> Naturally, this is one of those servers that started out being a
> development/staging/proof-of-concept machine and grew. The trouble is
> that the machine works pretty well even though it has all of these
> roles.
Then let sleeping dogs lie. If it ain't broke ...
However, as you look to the future (increasing users/load, new
versions, software doing more things / interacting with more software),
vm's may well be a migration path to look at. The same goes if you're
looking to migrate off the current box to 'faster' hardware, want more
process separation, or are moving from testing to more rigorous /
controlled / documented production.
One of the big reasons to move, in the Windows world, is apps tripping
over each other - e.g. IIS. Machine separation becomes the only
reasonable answer, especially when you have multi-app servers where
the apps / responsibilities belong to two different users / departments.
> As I dabble my pinky-toe into virtualization, I am wondering whether it
> makes much sense to split up a box that offers these 5-10 services into
> 5-10 distinct VMs.
>
> I understand that recovering from a bad hardware failure will be
> problematic. I understand that (in principle, at least) splitting up
> roles increases security. But I am having a hard time justifying why I
> would want to take an old Pentium III that works well and turn it
> into several slices of some much more powerful (and expensive!)
> machine.
Others, John and Raul, have already said it better, but see below. It
depends, in this PIII case, on whether it's being asked to do more
(e.g. more users using the same app), or whether you're concerned
about downtime if there's a failure - particularly if you're
uncomfortable just popping that disk into faster hardware and getting
a running app again in short order.
> I would not be splitting up this box right away even if I decided to,
> but I am at the stage where I am thinking about what kind of hardware
> I am going to need to support virtualized servers in production.
>
> Convince me?
>
> For those of you who do Linux server virtualization: how do you split
> up the load?
- you keep adding vm's until you feel you're using the box's resources
(CPU / memory) to a reasonable level, while allowing for spikes. (See
the first sketch at the end of this message.)
- with multiple boxes running multiple vm's on hand, when something
grows, or you add / test / prototype a deployment, you move it to
another vm box and continue monitoring performance.
- if you have multiple apps interacting with each other on one box,
and something is starving another app, separating it later to
accommodate the larger hardware requirements is more painful than
keeping it separate in the first place.
- doing this requires up-front work to make sure it truly is
transparent to users where a server physically lives - e.g. ldap, to
whatever extent that makes sense.
- if something uses a lot of disk space, or hits the disk hard, it
doesn't go into a vm. (e.g. SQL databases)
- vm's are all about uptime. Be that uptime for hardware issues,
uptime for patches and rollback, or speed to deployment (especially
when serving multiple departments with their 'own' servers).
- consider your per-function RTO (Recovery Time Objective - how long
can you afford to be down), and RPO (Recovery Point Objective - when
you come back, how recent must your data be [5 minutes pre-shutdown,
or 24 hours / last backup]). As your tolerance for downtime decreases,
you increase your hardware redundancy / failure tolerance. As you do
that, things get very expensive. And since most servers do not max out
CPU / memory, you try to max out the utilization of this very
expensive redundant hardware, and gain the benefit of that hardware /
power / redundancy in the meantime. (See the second sketch at the end
of this message.)
- running out of space and plugs to add yet another server? Current
boxes not maxing out memory / CPU? vm's may make sense.
- multi-site? Worried about power / net going down in one building,
cutting off the other building's functionality? vm servers at each
location with failover may be appropriate.
- it ain't easy, and it ain't cheap, and it sure increases your
complexity. How many clones of yourself do you have?
- deploying multiple / identical servers multiple times, perhaps to
multiple locations? Perhaps local file/print servers? [That attach to
redundant back end disk arrays?] e.g. access to the central remote
goes down, so you switch to using the local copy in the meantime.
(Assuming the local copy is sufficiently current.) Building A may be
storing local files and backing up (rsync) to Building B, and the same
for Building B back to A. (See the third sketch at the end of this
message.)
- vm's are but the tip of a larger effort, and the larger effort is
more complex than just 'should we vm it?'. That larger effort,
typically for redundancy / ease of use or maintenance, as above,
involves multiple sites, more complex hardware (fibre-channel drive
racks, for example), and correspondingly more robust back ends (e.g.
multiple sync'ed ldap). If you take on vm's by themselves, without
taking this larger context / master plan into consideration, you're
actually making your life harder. (Because when you do get to these
other considerations, you've created a larger or more complex
environment that then has to be adapted.)
- I suspect the correct answer is, on a per-function basis: what is
your tolerance for downtime? (And can you manage expectations?)
- if you can't manage expectations, i.e. management makes
unpredictable demands that you can't influence, then building things
as vm's in the first place provides you with adaptability and an
ability to roll as you need to, after the fact. Need more CPU, memory,
speed - tweak. Applications interacted in unexpected ways - on
separate vm's, that interaction never happened in the first place.
- don't forget ... if you're worried about network traffic between two
vm's: put two vm's with a lot of traffic between them on the same
server, and that traffic should stay internal to the host / never
actually get out to the physical network.
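
First sketch - the 'keep adding vm's while leaving headroom' point. A
minimal example using the libvirt Python bindings to compare what the
running guests have been allocated against what the host actually has.
The 'qemu:///system' URI and the 75% memory threshold are my own
assumptions for illustration, nothing more.

import libvirt

conn = libvirt.open("qemu:///system")      # local KVM host (assumption)

# Host totals: getInfo() returns [model, memory (MB), active CPUs, ...]
info = conn.getInfo()
host_mem_mb, host_cpus = info[1], info[2]

alloc_mem_kb = 0
alloc_vcpus = 0
for dom_id in conn.listDomainsID():        # running guests only
    dom = conn.lookupByID(dom_id)
    # dom.info() returns [state, maxMem kB, mem kB, nrVirtCpu, cpuTime ns]
    state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
    alloc_mem_kb += mem_kb
    alloc_vcpus += vcpus
    print("%-20s %6d MB  %d vCPU" % (dom.name(), mem_kb // 1024, vcpus))

mem_pct = 100.0 * (alloc_mem_kb / 1024.0) / host_mem_mb
print("committed: %.0f%% of %d MB RAM, %d vCPU on %d cores" %
      (mem_pct, host_mem_mb, alloc_vcpus, host_cpus))
if mem_pct > 75 or alloc_vcpus > host_cpus:
    print("not much headroom left - the next guest goes on another box")

conn.close()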
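
Second sketch - the per-function RTO/RPO bookkeeping is simpler than it
sounds. The functions and numbers below are completely made up (none of
them come from Paul's list of roles); the point is just to show how the
table tells you where the expensive redundant setup is actually
justified.

# (function, RTO hours, RPO hours) - illustrative figures only
functions = [
    ("trouble tickets", 8.0, 24.0),   # can wait until morning; nightly backup is fine
    ("internal mail",   4.0,  1.0),   # losing more than an hour of mail hurts
    ("ftp repository", 24.0, 24.0),
]

# What each candidate setup can actually deliver (again, assumptions):
setups = {
    "spare box + nightly backup": (8.0, 24.0),   # restore time, data age
    "vm + replicated storage":    (0.5,  0.25),
}

for name, rto, rpo in functions:
    good_enough = sorted(s for s, (s_rto, s_rpo) in setups.items()
                         if s_rto <= rto and s_rpo <= rpo)
    print("%-16s RTO<=%4.1fh RPO<=%4.1fh -> %s" %
          (name, rto, rpo, ", ".join(good_enough) or "neither"))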
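
Third sketch - the Building A / Building B rsync idea, as something you
could drop into cron on each file server. The paths, the
'backup@filesrv-b' destination and key-based ssh are all assumptions
for the sake of the example.

import subprocess, sys, time

SRC = "/srv/files/"                         # trailing slash: sync the contents
DST = "backup@filesrv-b:/srv/files-mirror/" # the 'other building' (assumption)

cmd = ["rsync", "-az", "--delete",          # archive, compress, mirror deletions
       "-e", "ssh", SRC, DST]

stamp = time.strftime("%Y-%m-%d %H:%M:%S")
rc = subprocess.call(cmd)
if rc != 0:
    # A failed sync means the other building's "local copy" is going stale,
    # which is exactly the RPO question above - so make some noise.
    sys.stderr.write("%s: rsync to %s failed (exit %d)\n" % (stamp, DST, rc))
    sys.exit(rc)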