Discussion:
YA "10 reasons to switch to Linux"
Lan Barnes
2008-04-30 18:03:41 UTC
Permalink
This is one of those recurring stories, like "Popular Mechanics's" flying
car. Still, with the exception of #7 (Xen --because it has been recently
debunked here), I think she nailed it.

http://blogs.techrepublic.com.com/10things/?p=342&tag=nl.e112
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Tracy R Reed
2008-04-30 18:39:41 UTC
Permalink
Post by Lan Barnes
This is one of those recurring stories, like "Popular Mechanics's" flying
car. Still, with the exception of #7 (Xen --because it has been recently
debunked here), I think she nailed it
Huh? Xen works as advertised and it is great. It is saving my company
tons of money and gives us capabilities we did not have before which
helps us react quickly to our clients' needs. I wouldn't say it has been
"debunked" at all.
Michael J McCafferty
2008-04-30 18:54:12 UTC
Permalink
Post by Tracy R Reed
Post by Lan Barnes
This is one of those recurring stories, like "Popular Mechanics's" flying
car. Still, with the exception of #7 (Xen --because it has been recently
debunked here), I think she nailed it
Huh? Xen works as advertised and it is great. It is saving my company
tons of money and gives us capabilities we did not have before which
helps us react quickly to our clients needs. I wouldn't say it has been
"debunked" at all.
...and I will add that plenty of people are buying servers from us to
perform server consolidation and setting up tons of Xen VMs on a single
bigger box. My theory is that people make more "machines" total when
they can make virtual ones, which ultimately sells more hardware than
actually needed since the VMs need RAM to exist.
The issues we see are that the Xen-enabled kernels for Ubuntu 7.10 and
Debian 4.0 (haven't tried 8.04 yet) do not have some of the drivers we
need for our hardware. So, we have to be careful to ask whether people
plan on using Xen when they buy servers for which the Xen kernels don't
have the drivers yet. We have had some customers try (succeed ???) to
compile them in, but most just want the pre-compiled binary kernel
packages to work. Specifically, our issue is with the 3ware 9650SE RAID
card.
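If anyone wants to check a given kernel before committing to Xen, a quick
way (just a sketch, assuming a Debian/Ubuntu-style packaged Xen kernel is
installed) is to look for the 3ware 9000-series driver, which is 3w-9xxx:

  # the 9650SE uses the 3w-9xxx driver; see whether the Xen kernel ships it
  grep -i 3W_9XXX /boot/config-*xen*
  find /lib/modules/*xen* -name '3w-9xxx*'

If CONFIG_SCSI_3W_9XXX isn't set to y or m and no module turns up, that
kernel won't see the card without a rebuild.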
A smaller number of our customers use VMware and there are no driver
issues there. Again, they make more "machines" than they would have if
they used a whole piece of hardware for each machine.... because it's
just so much cheaper to add a VM. Ironically, I am sure we sell more
real servers because of virtualization. It certainly doesn't NEED to be
that way. How *you* use virtualization is up to you ! :o)
--
************************************************************
Michael J. McCafferty
Principal, Security Engineer
M5 Hosting
http://www.m5hosting.com

You can have your own custom Dedicated Server up and running today !
RedHat Enterprise, CentOS, Ubuntu, Debian, OpenBSD, FreeBSD, and more
************************************************************
Lan Barnes
2008-04-30 19:05:24 UTC
Permalink
Post by Michael J McCafferty
Post by Tracy R Reed
Post by Lan Barnes
This is one of those recurring stories, like "Popular Mechanics's" flying
car. Still, with the exception of #7 (Xen --because it has been recently
debunked here), I think she nailed it
Huh? Xen works as advertised and it is great. It is saving my company
tons of money and gives us capabilities we did not have before which
helps us react quickly to our clients needs. I wouldn't say it has been
"debunked" at all.
...and I will add that plenty of people are buying servers from us to
perform server consolidation and setting up tons of Xen VMs on a single
bigger box. My theory is that people make more "machines" total when
they can make virtual ones, which ultimately sells more hardware than
actually needed since the VMs need RAM to exist.
My uninformed, unprofessional take on VM has always been that it's a boon
to those who (1) use windows servers, or (2) have a gazillion test
environments that they need to CM.

#1 because windoze is a jealous god, and the only sure way to restart a
service is to bounce the machine, which is drastic if more than one
strategic service is aboard.

#2 for obvious reasons to a quality guy and SCM specialist.

But if the server(s) are there just to run services, just use Linux naked
because you can stop and restart services safely.

Comments?
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Tracy R Reed
2008-04-30 20:15:48 UTC
Permalink
Post by Lan Barnes
But if the server(s) are there just to run services, just use Linux naked
because you can stop and restart services safely.
There's more to it than that. My big project of the last year or two has
been:

http://xenaoe.org

There's more to virtualization than just throwing up boxes.
Virtualization gets me higher availability and decouples my server from
any one piece of physical hardware. And, of course, the cost savings are
huge. I've got 64 servers in just 8u of rackspace thanks to
virtualization. Now I not only save tens of thousands on hardware but I
save around $800/mo from not having to rent a second rack.
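(Rough arithmetic, assuming those 64 guests are spread 8 to a host: 8
physical hosts fit in 8u, whereas 64 separate 1U boxes would need 64u of
space, which is more than an entire 42u rack on their own.)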
Michael J McCafferty
2008-04-30 20:27:34 UTC
Permalink
Nevermind the cheap space... tell me about the power you saved !
Post by Tracy R Reed
save around $800/mo from not having to rent a second rack.
--
************************************************************
Michael J. McCafferty
Principal, Security Engineer
M5 Hosting
http://www.m5hosting.com

You can have your own custom Dedicated Server up and running today !
RedHat Enterprise, CentOS, Ubuntu, Debian, OpenBSD, FreeBSD, and more
************************************************************
Mark Schoonover
2008-04-30 20:43:39 UTC
Permalink
On Wed, Apr 30, 2008 at 3:27 PM, Michael J McCafferty
Post by Michael J McCafferty
Nevermind the cheap space... tell me about the power you saved !
Power, and cooling.
Post by Michael J McCafferty
Post by Tracy R Reed
save around $800/mo from not having to rent a second rack.
--
Mark Schoonover, CMDBA
http://www.linkedin.com/in/markschoonover
http://marksitblog.blogspot.com
***@gmail.com
Lan Barnes
2008-04-30 21:18:22 UTC
Permalink
Post by Mark Schoonover
On Wed, Apr 30, 2008 at 3:27 PM, Michael J McCafferty
Post by Michael J McCafferty
Nevermind the cheap space... tell me about the power you saved !
Power, and cooling.
I'm confused (in Linux) on how this saves, since I would expect that
throughput, memory, and context switching on VM boxen would _at best_ only
equal running all those services on one box. What am I missing?

Doesn't it add up to the same number of instructions per time unit, the
same memory load, the same disk space (except VM should need marginally
more for context switching)?
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Tracy R Reed
2008-04-30 21:59:39 UTC
Permalink
Post by Lan Barnes
I'm confused (in Linux) on how this saves, since I would expect that
throughput, memory, and context switching on VM boxen would _at best_ only
equal running all those services on one box. What am I missing?
Sure. But the vast majority of our servers are idle 99% of the time. The
stuff sits on separate servers mainly for reliability and
maintainability reasons. Also sometimes we need different versions of
stuff, as you pointed out in your previous post about CM. We only need
the rare burst of speed and it's very unlikely that any two machines
need that burst of speed at the same time.
Post by Lan Barnes
Doesn't it add up to the same number of instructions per time unit, the
same memory load, the same disk space (except VM should need marginally
more for context switching)?
Yep. Except we can do it all on one motherboard and power supply and
unit of rack space instead of spreading it out over many.
Lan Barnes
2008-05-01 00:06:16 UTC
Permalink
Post by Tracy R Reed
Post by Lan Barnes
I'm confused (in Linux) on how this saves, since I would expect that
throughput, memory, and context switching on VM boxen would _at best_ only
equal running all those services on one box. What am I missing?
Sure. But the vast majority of our servers are idle 99% of the time. The
stuff sits on separate servers mainly for reliability and
maintainability reasons. Also sometimes we need different versions of
stuff, as you pointed out in your previous post about CM. We only need
the rare burst of speed and it's very unlikely that any two machines
need that burst of speed at the same time.
Post by Lan Barnes
Doesn't it add up to the same number of instructions per time unit, the
same memory load, the same disk space (except VM should need marginally
more for context switching)?
Yep. Except we can do it all on one motherboard and power supply and
unit of rack space instead of spreading it out over many.
But in Linux/FreeBSD/etc., I'm still puzzled as to how this is superior on
straight servers to just running a bunch of services on one box and
forgetting Xen?

I get it in windoz, although running Linux build machines on a VMWare
windoze box (as we do at work) strikes me as throwing out the baby and
keeping the bathwater, and, yes, it does screw us from time to time when
IT pushes a windoze upgrade and crashes the Linux build VM.
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Mike Marion
2008-05-01 05:22:18 UTC
Permalink
Post by Tracy R Reed
Sure. But the vast majority of our servers are idle 99% of the time.
The stuff sits on separate servers mainly for reliability and
maintainability reasons. Also sometimes we need different versions of
A couple of other things we've been looking at virtualization on Linux for...

1. Ability to migrate running processes if needed. When you've got
jobs that can run for days if not weeks, this can be a huge gain.

2. Using VMs on top of a standard image across hosts to temporarily
boot previously supported images when required. Hopefully we'll
eventually use grid tools to automate creation/tear-down of said VMs
when jobs need them (there's a rough sketch of that below the list).

2a. Some of our ISVs (and others) are apparently eyeing possibly
selling their software as self-contained VM instances... this
guarantees the software runs and lowers their support issues with
varying distributions/versions, removes the constant need for testing
with various versions of libraries, etc.

3. Use of VMs like containers on Solaris... complete isolation of
system resources for jobs, so even if a program has a bug that makes
it go hog wild on resources (I've seen apps send a 128Gig box OOM
inside of a minute, and the OS could/did not handle it) it won't
affect the others. Limits can do this too; just looking at options and
comparing.

4. Possibly replacing most desktops with thin clients (I personally
think NX is the best choice, and there are clients for it for at least
Wyse and HP thin clients) to a VM so the user can do anything they
want without affecting others, even if they completely trash the box.
#1 comes back in too if we can migrate them if/when needed.

Mostly thinking paravirt vs full virt at this point.
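For #2, the create/tear-down part is already pretty scriptable with the
plain xm tools; a minimal sketch (the config path and guest name here are
made up) would be something like:

  # bring up a throwaway guest for the job, then tear it down afterward
  xm create /etc/xen/build-rhel3.cfg
  xm console build-rhel3        # or ssh in and dispatch the job
  xm shutdown -w build-rhel3    # graceful shutdown; -w waits for it
  # xm destroy build-rhel3      # hard stop if it wedges

The grid scheduler would just wrap calls like these in the job
prolog/epilog.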
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
"In the closed-source world, Version 1.0 means "Don't touch this if
you're prudent."; in the open-source world it reads more like "The
developers are willing to bet their reputations on this." - Eric Raymond
Tracy R Reed
2008-05-01 19:50:04 UTC
Permalink
Post by Mike Marion
1. Ability to migrate running processes if needed. When you've got jobs
that can run for days if not weeks, this can be a huge gain.
I use this regularly. I recently decided that the power supplies in my
cpu servers just didn't leave enough margin for safety and were getting
too hot and therefore more likely to fail. So I migrated the VMs off of
each machine, replaced the power supply as I went, and then migrated the
VMs back. Nobody knew that all of the CPU servers had just had a power cycle.
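In xm terms that's just live migration back and forth. Roughly, and
assuming xend relocation is enabled on the receiving host in
xend-config.sxp (host and guest names made up):

  xm migrate --live guest01 node2   # evacuate while it keeps running
  # ...swap the power supply on node1...
  xm migrate --live guest01 node1   # bring it back home

repeated for each guest on the box.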
Post by Mike Marion
4. Possibly replacing most desktops with thin clients (I personally
think NX is the best choice, and there are clients for it for at least
I am looking at this also. I've been playing with NX off and on for the
last year or so. Does LTSP have NX integrated yet? I know they were
working on it a while back.
Post by Mike Marion
Mostly thinking paravirt vs full virt at this point.
Definitely. The performance hit with paravirt is small enough to be
negligible for all of my applications.
Doug LaRue
2008-05-01 20:03:46 UTC
Permalink
** Reply to message from Tracy R Reed <***@ultraviolet.org> on Thu, 01 May
2008 14:50:01 -0700
Post by Tracy R Reed
I am looking at this also. I've been playing with NX off and on for the
last year or so. Does LTSP have NX integrated yet? I know they were
working on it a while back.
Don't know but I'm always seeing talk of using ssh tunneling to gain
performance comparable to NX. I've also seen people mentioning running
NX on edubuntu. Hardy is so new that I've not seen much on what's going
on with LTSP in Edubuntu. That's another VM experiment I need to get back
to. I have a 2nd network card in my server connected to Edubuntu in
a VM running LTSP. Worked great on the test client but I've not snaked
the new Cat5 wire out to the distribution area for regular client usage.

I loved the ability NX has to stop a session mid-stream and then pick it up on
another client. It would be nice if LTSP eventually gets that if it's not there
already.

Doug
Mike Marion
2008-05-01 23:19:28 UTC
Permalink
Post by Doug LaRue
Don't know but I'm always seeing talk of using ssh tunneling to gain
performance comparable to NX. I've also seen people mentioning running
Heh.. lately I've been making my NX connections to work via an ssh
tunnel (got tired of the vpn options) so my NX session is basically
going through ssh inside ssh.
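(Concretely, that's just a local port forward; a sketch with invented
hostnames:

  ssh -f -N -L 2022:nx-server.internal:22 gateway.example.com

and then the NX client is pointed at localhost port 2022, so NX's own ssh
connection rides inside the first one.)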

While ssh tunnels will allow much the same connectivity, you won't
have the session reconnection ability.. unless you're ssh tunneling
vnc or something. I tend to use the same session from home for weeks
on end, reconnecting when needed.

One big advantage we can see with something like NX with clients in
thin client hw at work is the ability for an engineer to go home and
reconnect to the same login they just left. Sunray has some of this,
but via a Sunray at home with either a hw vpn box (pre Sunray2) or a
newer Sunray with the vpn client built in. However, that adds cost. Sun
tried to write a Sunray software client a couple of years ago, but it was
unusably slow (big shocker, it was also Java, which was probably part
of the problem). They supposedly have done somewhat better, but
really, why bother at this point? It still uses shared display servers,
where even one user running just about anything on the server itself will
cause slowdowns for everyone else connected to it. I'd rather
segregate each user to their own virtual box that they can pound the
crap out of.

NX would allow anyone with a computer at home (and I doubt we'd have
more than a couple of employees without one) to just run the NX software
client at home. Or if they have a company laptop, they could use that
at work and home and not even use a hw thin client.
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
Vir: "Londo, are you deliberatly trying to drive me insane?"
Londo: "The Universe is already mad. Anything else would be redundant."
==> Babylon 5
James G. Sack (jim)
2008-05-02 04:16:12 UTC
Permalink
Post by Mike Marion
Post by Doug LaRue
Don't know but I'm always seeing talk of using ssh tunneling to gain
performance comparable to NX. I've also seen people mentioning running
Heh.. lately I've been making my NX connections to work via an ssh
tunnel (got tired of the vpn options) so my NX session is basically
going through ssh inside ssh.
Ummm, ssh inside ssh?

Reminds me that I've heard that NX has some performance advantages --
guess I'd like to find out some more about it. You have any suggestions
for where I should start?
Post by Mike Marion
..
Regards,
..jim
Gregory K. Ruiz-Ade
2008-05-02 14:57:50 UTC
Permalink
Post by Tracy R Reed
Definitely. The performance hit with paravirt is small enough to be
negligible for all of my applications.
anyone have a link that explains, for stupid people like me, the
differences between paravirt and full virt on xen, including all the
limitations of the two?

TIA,

Gregory
--
Gregory K. Ruiz-Ade <***@unnerving.org>
OpenPGP Key ID: EAF4844B keyserver: pgpkeys.mit.edu
Mike Marion
2008-05-02 16:12:57 UTC
Permalink
Post by Gregory K. Ruiz-Ade
anyone have a link that explains, for stupid people like me, the
differences between paravirt and full virt on xen, including all the
limitations of the two?
Quick and dirty (I might be wrong, but this is how I understand it):
- Paravirt - Now hw support required. Quest OSes must know they're
running inside paravirt host (i.e. linux with xen kernel). Has speed
advantages over full because it's basically like an OS inside
processes and so the main/dom0 install is basically just switching
processes. You also cannot cross bitness (i.e. no 32bit guest OS on
64bit dom0 OS.. yet anyway)

- Full virt - Requires HW support (i.e. AMD Pacifica or Intel VTx
extensions). Advantages are the guest OS doesn't even know it's being
virtualized. Disadvantage is more of a speed hit as the hw basically
has to context switch out all the info (registers and such) for each
switch between virtual hosts (vs the paravirt being done by OS).

I could be wrong on the speed diffs, but everything I've read is that
paravirt has a definite speed advantage (in most if not all cases) but
because the guest needs to know it's being paravirtualized, that's the
one big negative most people see. If you're only virtualizing linux
on linux, or any paravirtualizable OS.. then it's not a problem.
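To make that concrete (the names and paths here are invented, and exact
option names vary a bit between Xen versions), the difference shows up
right in the guest config file:

  # paravirt guest: dom0 supplies a xen-aware kernel for the guest to boot
  name    = "pv-guest"
  memory  = 512
  kernel  = "/boot/vmlinuz-2.6.18-xen"
  ramdisk = "/boot/initrd-2.6.18-xen.img"
  disk    = [ 'phy:/dev/vg0/pv-guest,xvda,w' ]

  # full virt (HVM) guest: needs VT-x/AMD-V, boots its own unmodified OS
  name         = "hvm-guest"
  memory       = 512
  builder      = "hvm"
  kernel       = "/usr/lib/xen/boot/hvmloader"
  device_model = "/usr/lib/xen/bin/qemu-dm"
  disk         = [ 'phy:/dev/vg0/hvm-guest,ioemu:hda,w' ]
  boot         = "c"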

As a side note, I've had a host on slicehost for several months now
(basically my dns and MX server) that's nothing but a xen instance.
It's been rock solid and has performed beautifully. You'd never know
you weren't on your own box if you didn't know what the -xen in the
kernel meant.
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
Commentator in "Triple Play Baseball" for PS2: "The key to scoring runs is
cashing in when you're in scoring position."
-- Thank you, Captain Obvious!
Mike Marion
2008-05-02 16:18:52 UTC
Permalink
Post by Mike Marion
- Paravirt - Now hw support required. Quest OSes must know they're
*sigh*... s/Now/No
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
Whoever came up with the idea of sending email as HTML should be shot,
hung, drowned, poisoned, eviscerated, decapitated, drawn and quartered,
burned at the stake, impaled, crushed, flayed, asphyxiated, and sodomized
with a three-foot-long, foot-diameter jagged, red-hot poker. All at the
same time.
John Oliver
2008-05-02 17:30:39 UTC
Permalink
Post by Mike Marion
Post by Mike Marion
- Paravirt - Now hw support required. Quest OSes must know they're
*sigh*... s/Now/No
And s/Quest/Guest/g ? :-D
--
***********************************************************************
* John Oliver http://www.john-oliver.net/ *
* *
***********************************************************************
Carl Lowenstein
2008-05-02 18:45:00 UTC
Permalink
Post by John Oliver
Post by Mike Marion
Post by Mike Marion
- Paravirt - Now hw support required. Quest OSes must know they're
*sigh*... s/Now/No
And s/Quest/Guest/g ? :-D
But not if you work at Gualcomm.

carl
--
carl lowenstein marine physical lab u.c. san diego
***@ucsd.edu
Alan
2008-05-02 17:30:25 UTC
Permalink
Post by Mike Marion
As a side note, I've had a host on slicehost for several months now
(basically my dns and MX server) that's nothing but a xen instance.
It's been rock solid and has performed beautifully. You'd never know
you weren't on your own box if you didn't know what the -xen in the
kernel meant.
I keep considering them, but the size limits are killer.
10GB for the smallest instance just isn't enough.

-ajb
Michael J McCafferty
2008-05-02 17:45:50 UTC
Permalink
Um... so buy a bigger one !

Do you mean that you were hoping to get more than 10GB of disk for
$20/mo ? It looks to be the going rate... linode.com is the same.
Post by Alan
Post by Mike Marion
As a side note, I've had a host on slicehost for several months now
(basically my dns and MX server) that's nothing but a xen instance.
It's been rock solid and has performed beautifully. You'd never know
you weren't on your own box if you didn't know what the -xen in the
kernel meant.
I keep considering them, but the size limits are killer.
10GB for the smallest instance just isn't enough.
-ajb
--
************************************************************
Michael J. McCafferty
Principal, Security Engineer
M5 Hosting
http://www.m5hosting.com

You can have your own custom Dedicated Server up and running today !
RedHat Enterprise, CentOS, Ubuntu, Debian, OpenBSD, FreeBSD, and more
************************************************************
Alan
2008-05-02 18:12:10 UTC
Permalink
Post by Michael J McCafferty
Um... so buy a bigger one !
Do you mean that you were hoping to get more than 10GB of disk for
$20/mo ? It looks to be the going rate... linode.com is the same.
I'm willing to spend $20 to muck about with a VPS if I can use it in the
same manner I use my current host.
Since I'm using roughly 18GB at my current place, that puts me in the $38
package at slicehost, soon to be at the $70 one.
It's just not economical for my cheap ass! heh.


-ajb
Michael J McCafferty
2008-05-02 18:22:16 UTC
Permalink
Post by Alan
Post by Michael J McCafferty
Um... so buy a bigger one !
Do you mean that you were hoping to get more than 10GB of disk for
$20/mo ? It looks to be the going rate... linode.com is the same.
I'm willing to spend $20 to muck about with a VPS if I can use in the same
manner I use my current host.
Since I'm using roughly 18gb at my current place, that puts me in the $38
package at slicehost, soon to be at the $70 one.
It's just not economical for my cheap ass! heh.
Who is your current host ? Is your current plan <$38/mo ?
--
************************************************************
Michael J. McCafferty
Principal, Security Engineer
M5 Hosting
http://www.m5hosting.com

You can have your own custom Dedicated Server up and running today !
RedHat Enterprise, CentOS, Ubuntu, Debian, OpenBSD, FreeBSD, and more
************************************************************
Alan
2008-05-02 18:27:44 UTC
Permalink
Post by Michael J McCafferty
Who is your current host ? Is your current plan <$38/mo ?
Yes, but it's dreamhost shared hosting and not all that great.
However, it's $7.95 a month.

-ajb
John Oliver
2008-05-02 17:33:03 UTC
Permalink
Post by Mike Marion
Post by Gregory K. Ruiz-Ade
anyone have a link that explains, for stupid people like me, the
differences between paravirt and full virt on xen, including all the
limitations of the two?
- Paravirt - Now hw support required. Quest OSes must know they're
running inside paravirt host (i.e. linux with xen kernel). Has speed
advantages over full because it's basically like an OS inside
processes and so the main/dom0 install is basically just switching
processes. You also cannot cross bitness (i.e. no 32bit guest OS on
64bit dom0 OS.. yet anyway)
- Full virt - Requires HW support (i.e. AMD Pacifica or Intel VTx
extensions). Advantages are the guest OS doesn't even know it's being
virtualized. Disadvantage is more of a speed hit as the hw basically
has to context switch out all the info (registers and such) for each
switch between virtual hosts (vs the paravirt being done by OS).
Then how about VMware Server? It does not require special hardware,
nor does the OS require special drivers.
--
***********************************************************************
* John Oliver http://www.john-oliver.net/ *
* *
***********************************************************************
Doug LaRue
2008-05-02 18:23:13 UTC
Permalink
** Reply to message from John Oliver <***@john-oliver.net> on Fri, 2 May
2008 12:33:00 -0700
Post by John Oliver
Then how about VMware Server? It does not require special hardware,
nor does the OS require special drivers.
It follows what was said about "full virt" in that you lose speed. But if
you load special drivers, you can get some of that back, like using VMware
Tools to get graphics, network, etc. performance boosts.

Before the CPUs had this kind of virtualization support, you'd see a
good hit for virtualizing, possibly in the 20%-30% range. Now, with the
CPU supporting this and your virtualization platform using those CPU
services, your plain-old virtual machines only see around a 10%
performance hit. Or so I've heard.

Doug
Andrew Lentvorski
2008-05-02 19:11:14 UTC
Permalink
Post by John Oliver
Then how about VMware Server? It does not require special hardware,
nor does the OS require special drivers.
Well, actually it kinda does the moment you want to touch hardware with
anything approaching reasonable speed.

But, VMWare does full emulation a la QEMU. It emulates *everything* in
software. If you can do that, you can play a lot of tricks to make
things faster and VMWare has a lot of tricks.

Even so, it's noticeably slower than a native OS on the same hardware.
The issue is that I normally don't care. I'm running VMWare because I
need Windows or a specific Linux instance. In those cases, I'm happy to
pay the speed penalty to not have to go configure a separate piece of
hardware for something that I'm only likely to need for a specific task.

-a
Brinkley Harrell
2008-05-02 22:52:02 UTC
Permalink
Post by Andrew Lentvorski
Post by John Oliver
Then how about VMware Server? It does not require special hardware,
nor does the OS require special drivers.
Well, actually it kinda does the moment you want to touch hardware
with anything approaching reasonable speed.
But, VMWare does full emulation ala QEmu. It emulates *everything* in
software. If you can do that, you can play a lot of tricks to make
things faster and VMWare has a lot of tricks.
Even so, it's noticeably slower than a native OS on the same hardware.
The issue is that I normally don't care. I'm running VMWare because I
need Windows or a specific Linux instance. In those cases, I'm happy
to pay the speed penalty to not have to go configure a separate piece
of hardware for something that I'm only likely to need for a specific
task.
There are several things that you can do to speed up VmWare. First of
all, and most importantly, memory is crucial to running VmWare well on
any machine.

The next thing that helps is to give each virtual machine its own
physical disk partition instead of using the standard disk-file
emulation. This decreases the actual disk manipulation overhead
significantly. It also helps to place these physical partitions for your
virtual machines on separate disk controllers. USB disks don't work as
well as separate internal disk controllers.

Of course, you rapidly stretch beyond basic PC architecture. I am
running 16 images on a Dell PowerEdge 2850 with 4 cores and 16GB of
memory using CentOS v5.1 i386 with PAE extensions. The multi-interface
network connections are a boon as well. In this computer's case, I am
running Seagate Ultra-SCSI in a RAID-5 architecture (these disks are
15,000 RPM).

The next logical extension from here would be to move to VmWare's ESX
server and run a minimal hypervisor architecture and maximize the vm's
footprint space.
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Brinkley Harrell
http://www.fusemeister.com
Tracy R Reed
2008-05-03 03:30:18 UTC
Permalink
Post by Brinkley Harrell
The next logical extension from here would be to move to VmWare's ESX
server and run a minimal hypervisor architecture and maximize the vm's
footprint space.
For those who do not already know, this is what Xen is by default: a
hypervisor.
--
Tracy R Reed Read my blog at http://ultraviolet.org
Key fingerprint = D4A8 4860 535C ABF8 BA97 25A6 F4F2 1829 9615 02AD
Non-GPG signed mail gets read only if I can find it among the spam.
Andrew Lentvorski
2008-05-03 05:00:30 UTC
Permalink
Post by Tracy R Reed
Post by Brinkley Harrell
The next logical extension from here would be to move to VmWare's ESX
server and run a minimal hypervisor architecture and maximize the vm's
footprint space.
For those who do not already know this is what xen is by default: a
hypervisor.
Now, I don't know what "minimal" means, but Xen struck me as being far
from qualifying as minimal.

-a
Brad Beyenhof
2008-05-03 05:31:05 UTC
Permalink
Post by Tracy R Reed
Post by Brinkley Harrell
The next logical extension from here would be to move to VmWare's ESX
server and run a minimal hypervisor architecture and maximize the
vm's footprint space.
For those who do not already know this is what xen is by default: a
hypervisor.
Now, I don't know what "minimal" means, but Xen struck me as being far from
qualifying as minimal.
I don't think the connection was being made between "xen" and
"minimal;" just "xen" and "hypervisor." A hypervisor is technically
just a system of virtualizing the processor at the hardware level and
not from within an operating system. Some amount of processor/RAM is
taken up by the hypervisor itself, but much less than attempting to
run a full OS as a virtualization host.

VMWare ESX (and especially ESXi) bill themselves as "bare metal"
hypervisors, and attempt to provide resources to multiple guests while
requiring as little power themselves as possible. ESXi purports to
need only 32MB of RAM.

A couple of years ago, when Intel was first coming out with chips that
support hardware virtualization, there was actually a rash of rootkits
that installed themselves as hypervisors so that they were
supposedly unable to be seen from inside a running OS. The most
famous, and first successful, of these exploits was called "Blue Pill"
(so named, apparently, because it kept the machine within an
undetectable "Matrix"). From what I heard and read at that time, it
appeared to have affected Windows primarily or even solely.

By the way, since Linode has been mentioned recently: my personal
server is a VPS from Linode, and I've really liked it. Linode
currently uses a homegrown version of UML, but they just completed
testing for Xen and will be rolling it out to their hosts over the
next several months.
--
Brad Beyenhof http://augmentedfourth.com
Have the courage to be ignorant of a great number of things, in order to
avoid the calamity of being ignorant of everything.
~ Sydney Smith, English essayist and preacher (1771-1845)
Tracy R Reed
2008-05-03 07:27:30 UTC
Permalink
Post by Andrew Lentvorski
Now, I don't know what "minimal" means, but Xen struck me as being far
from qualifying as minimal.
I'm not sure what they could really take out that I wouldn't miss as a
feature.

[***@home ~]$ ls -la /boot/vmlinuz-2.6.18-xen
-rw-r--r-- 1 root root 1478167 2007-06-13 22:33 /boot/vmlinuz-2.6.18-xen
[***@home ~]$ ls -la /boot/xen-3.0.2.gz
-rw-r--r-- 1 root root 246927 2006-04-09 18:38 /boot/xen-3.0.2.gz

At least it is only 17% the size of my fully modularized Linux kernel.
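(That 17% is just the ratio of the two sizes listed above: 246927 /
1478167 is roughly 0.167.)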
--
Tracy R Reed Read my blog at http://ultraviolet.org
Key fingerprint = D4A8 4860 535C ABF8 BA97 25A6 F4F2 1829 9615 02AD
Non-GPG signed mail gets read only if I can find it among the spam.
Andrew Lentvorski
2008-05-04 23:57:02 UTC
Permalink
Post by Tracy R Reed
Post by Andrew Lentvorski
Now, I don't know what "minimal" means, but Xen struck me as being far
from qualifying as minimal.
I'm not sure what they could really take out that I wouldn't miss as a
feature.
-rw-r--r-- 1 root root 1478167 2007-06-13 22:33 /boot/vmlinuz-2.6.18-xen
-rw-r--r-- 1 root root 246927 2006-04-09 18:38 /boot/xen-3.0.2.gz
At least it is only 17% the size of my fully modularized Linux kernel.
Linux guys love the kernel size game.

Okay, now include the userland that Xen requires that you keep around
along with things that require workarounds like mounting the install CD
via loopback NFS because you need special block drivers for Xen but
*only have them for Linux*.

Xen is fine for Linux in Linux (but only for certain favored flavors).
Nothing else.

Yes, I'm grouchy about this. And, yes, I'm slagging on Xen. Somebody
needs to offset the hype. Xen promises much and delivers far less.

The problem is that Xen has left the realm of open source and is now
firmly in the realm of marketechture.

-a
Tracy R Reed
2008-05-05 02:30:36 UTC
Permalink
Post by Andrew Lentvorski
Linux guys love the kernel size game.
Ouch. Nice way to avoid discussing the numbers I presented and jump
dangerously close to ad hominem.
Post by Andrew Lentvorski
Okay, now include the userland that Xen requires that you keep around
along with things that require workarounds like mounting the install CD
via loopback NFS because you need special block drivers for Xen but
*only have them for Linux*.
We are commenting on the use of the phrase "minimal hypervisor". The
hypervisor is different from the userland. Who cares how big the
userland is? You don't need special block drivers for xen. I can point
xen at my iso or /dev/cdrom just fine without involving any special
block device drivers.

And why do you think the xen guys are obligated to code for your favored
OS? Why don't the people who are interested in your OS write the code?
Surely they would be much better at it.
Post by Andrew Lentvorski
Xen is fine for Linux in Linux (but only for certain favored flavors).
Nothing else.
Having actually run a lot of stuff under xen I have to disagree.
--
Tracy R Reed Read my blog at http://ultraviolet.org
Key fingerprint = D4A8 4860 535C ABF8 BA97 25A6 F4F2 1829 9615 02AD
Non-GPG signed mail gets read only if I can find it among the spam.
Andrew Lentvorski
2008-05-05 03:47:20 UTC
Permalink
Post by Tracy R Reed
We are commenting on the use of the phrase "minimal hypervisor". The
hypervisor is different from the userland. Who cares how big the
userland is? You don't need special block drivers for xen. I can point
xen at my iso or /dev/cdrom just fine without involving any special
block device drivers.
Only because you're in Linux accessing a Linux, so the call passes
through. The block driver basically doesn't have to do much translation,
so it is easy to write. Xen relies on the fact that it has a compatible
DomU that can handle a lot of the translation in order to minimize the
amount of translation in the hypervisor itself.

This is the reason why Xen has been such a PITA for most of the stuff
I've been trying.

Try it on OpenSolaris Dom0 for an eye-opener. You can't install from a
CDROM because the block drivers don't exist that will allow access to
the CDROM (thus the loopback NFS mount workaround). This is because you
can't just "pass through" a device API call from a DomU to a Dom0 like
you can from Linux-to-Linux.

And, before you go blaming OpenSolaris too much, this is an x86 machine
so it has the standard BIOS, etc. that could be used for basic install
work. But, of course, Xen's real-mode emulation is broken so that
doesn't work either. Oops.
Post by Tracy R Reed
And why do you think the xen guys are obligated to code for your favored
OS? Why don't the people who are interested in your OS write the code?
Surely they would be much better at it.
They're not under any such obligation. However, if they choose that
path, then they don't get to claim to be a real virtualization solution.
Nor do they get to claim to be "open source" when you need their $1000
drivers to make Windows work.

You don't get to claim all the buzzwords and then not deliver and not
get slagged for it.

I'm wondering if, perhaps, we aren't looking at Xen through different
perspectives. I note that you tend to use the 3.0 Xen series. Is that
before or after they got bought? You may be looking at Xen through
rose-colored glasses before their corporatization and expecting too
little, while I may be looking at Xen through black-colored glasses
after the marketechture hype moved in (and having previously used
VMWare) and expecting far too much.

I am *not* a happy puppy with Xen. It's been a long time since I have
been quite this unhappy about an OS implementation/feature. The last
time was probably when I finally had enough of crappy Linux video
drivers and changed to OS X. I expect open source OS breakage; it goes
with the territory. I've even written my own device drivers when
required. However, this one has really ticked me off. I haven't quite
figured out why.

I think it's probably the fact that I feel a lack of quality. I'm about
to trust my server to a bunch of folks who can't keep something working
that they had working a version ago? The FreeBSD guys didn't bother
trying to be a DomU because the API was in such flux that it wasn't
worth their time. Is that something I really am going to trust? The
folks writing Xen have pulled their Windows drivers closed-source. Am I
comfortable with a company that is clearly prioritizing Windows users to
that extent? How do I know that Xen won't pull the same stunt as VMWare
and make their extra driver management Windows-only?

I also think that Xen got pulled into production before it was ready
because the company is trying to prevent VMWare from getting an
entrenched position. Thus, I'm reacting to the fact that Xen is too
technically premature to be garnering the marketing push that it is
getting. I probably shouldn't have looked at it yet.
Post by Tracy R Reed
Post by Andrew Lentvorski
Xen is fine for Linux in Linux (but only for certain favored flavors).
Nothing else.
Having actually run a lot of stuff under xen I have to disagree.
That is your opinion, and it was your opinion and a couple of others
that made me even consider trying Xen in advance of what I normally
would. Normally, I wait until I hear about a technology from one of my
bellwethers before trying it out. I violated that rule for Xen.

At this point, I *might* recommend it for Linux-on-Linux. Maybe. If
Windows is on the menu, I would steer them at VMWare.

Otherwise, I'd tell them to talk to one of the companies that stuffs 2
or 4 small machines in a 1U case and look at Xen in a couple of years.
It might be a real product by then.

Good software takes 10 years. Xen probably needs 3 more years yet.

-a
Bob La Quey
2008-05-05 04:08:29 UTC
Permalink
Post by Andrew Lentvorski
Good software takes 10 years. Xen probably needs 3 more years yet.
-a
So, not surprisingly, the future is constructed on bad software.
Who wants to be the past?

Answer: Those for whom reliability is God.

Refrain: Reliability pays today's bills. (Thank God, not new not shiny.)

MetaAnswer: But not tomorrow's.

BobLQ
SJS
2008-05-05 04:37:14 UTC
Permalink
Post by Bob La Quey
Post by Andrew Lentvorski
Good software takes 10 years. Xen probably needs 3 more years yet.
So, not surprisingly, the future is constructed on bad software.
Who wants to be the past?
Answer: Those for whom reliability is God.
Refrain: Reliability pays today's bills. (Thank God, not new not shiny.)
MetaAnswer: But not tomorrow's.
I'd say that it pays tomorrow's bills as well.

But you don't get rich paying the bills. You want to zoom to the top
of the heap? You put down your chips and roll the dice.

Most of the time, most of the folks will lose. That's okay. Some of the
time, some of the folks will win, and win big.
--
The glittering spires of the future are built on the bedrock of the past.
Stewart Stremler
Tracy R Reed
2008-05-05 04:45:55 UTC
Permalink
Post by SJS
Most of the time, most of the folks will lose. That's okay. Some of the
time, some of the folks will win, and win big.
They say that 1 in 10 small businesses fail in the first year. What does
that tell me? It tells me that since I've already been involved in a few
failures (I think I can count MP3.com) I'm getting closer to success!

I have also heard a salesman say that every "no" is one step closer to a
"yes".
--
Tracy R Reed Read my blog at http://ultraviolet.org
Key fingerprint = D4A8 4860 535C ABF8 BA97 25A6 F4F2 1829 9615 02AD
Non-GPG signed mail gets read only if I can find it among the spam.
SJS
2008-05-05 04:49:22 UTC
Permalink
Post by Tracy R Reed
Post by SJS
Most of the time, most of the folks will lose. That's okay. Some of the
time, some of the folks will win, and win big.
They say that 1 in 10 small businesses fail in the first year. What does
that tell me? It tells me that since I've already been involved in a few
failures (I think I can count MP3.com) I'm getting closer to success!
I have also heard a salesman say that every "no" is one step closer to a
"yes".
Gambler's fallacy.
--
It seems hardwired into our brains.
Stewart Stremler
chris at seberino.org ()
2008-05-05 21:30:37 UTC
Permalink
Post by SJS
Post by Tracy R Reed
I have also heard a salesman say that every "no" is one step closer to a
"yes".
Gambler's fallacy.
It depends how literally you take the comment. It would be a fallacy to
think that if tails came up on the last 4 coin flips, there is somehow
some magical force increasing the chances of the next flip being heads.

The meta-point here is that if Tracy and salespeople press on and don't
quit, then eventually they'll come up heads.

Chris
SJS
2008-05-05 21:45:15 UTC
Permalink
Post by chris at seberino.org ()
Post by SJS
Post by Tracy R Reed
I have also heard a salesman say that every "no" is one step closer to a
"yes".
Gambler's fallacy.
It depends how literally you take the comment. It would be fallacy to think
that if tails comes up on the last 4 coin flips, that somehow there is some
magical force increasing the chances of the next flip being heads.
The meta-point here is that if Tracy and salespeople press on and don't quit
that eventually they'll come up heads.
Thus demonstrating that fallacies can actually be useful.
--
In for a penny, in for a pound.
Stewart Stremler
David Brown
2008-05-05 04:50:26 UTC
Permalink
Post by Tracy R Reed
I have also heard a salesman say that every "no" is one step closer to a
"yes".
So, salespeople don't understand the gambling fallacy, not too surprising.

It's as bad as the PHB comment to Alice: "Why is the correct solution
always the last thing you try?"

David
Ralph Shumaker
2008-05-11 18:48:27 UTC
Permalink
Post by David Brown
Post by Tracy R Reed
I have also heard a salesman say that every "no" is one step closer
to a "yes".
So, salespeople don't understand the gambling fallacy, not too
surprising.
It's as bad as the PHB comment to Alice: "Why is the correct solution
always the last thing you try?"
Well, duh! Why, after finding the correct solution, would you want to
try something else?
David Brown
2008-05-11 20:16:27 UTC
Permalink
Post by David Brown
It's as bad as the PHB comment to Alice: "Why is the correct solution
always the last thing you try?"
Well, duh! Why, after finding the correct solution, would you want to try
something else?
That was the point of the Dilbert comic, although, since Adams didn't point
it out explicitly, I'm guessing that a lot of people just didn't get it.

He also made a joke about the PHB being upset that 40% of sick days were
on Monday or Friday. He never explained why that was a weird thing to be
concerned about.

I'd actually be surprised if sick days really were spread out evenly.

David
Bob La Quey
2008-05-11 20:49:53 UTC
Permalink
Post by David Brown
Post by David Brown
It's as bad as the PHB comment to Alice: "Why is the correct solution
always the last thing you try?"
Well, duh! Why, after finding the correct solution, would you want to try
something else?
That was the point of the Dilbert comic, although, since Adams didn't point
it out explicitly, I'm guessing that a lot of people just didn't get it.
He also made a joke about the PHB being upset that 40% of sick days were on
Monday or Friday. He never explained why that was a weird thing to be
concerned about.
I'd actually be surprised if sick days really were spread out evenly.
Which is what 40% would suggest. I would bet (is this what you suggest)
that Monday and Friday are more likely than Wed~Thurs.

BobLQ
Bob La Quey
2008-05-12 13:07:27 UTC
Permalink
Post by David Brown
Post by Ralph Shumaker
Post by David Brown
It's as bad as the PHB comment to Alice: "Why is the correct solution
always the last thing you try?"
Well, duh! Why, after finding the correct solution, would you want to try
something else?
That was the point of the Dilbert comic, although, since Adams didn't point
it out explicitly, I'm guessing that a lot of people just didn't get it.
Chuckle. I took it a completely different way.

My take was "The correct solution is obvious,
but it took you ten tries to find it."

BobLQ
Ralph Shumaker
2008-05-13 11:06:25 UTC
Permalink
Post by David Brown
Post by Ralph Shumaker
Post by David Brown
It's as bad as the PHB comment to Alice: "Why is the correct solution
always the last thing you try?"
Well, duh! Why, after finding the correct solution, would you want
to try something else?
That was the point of the Dilbert comic, although, since Adams didn't point
it out explicitly, I'm guessing that a lot of people just didn't get it.
He also made a joke about the PHB being upset that 40% of sick days were on
Monday or Friday. He never explained why that was a weird thing to be
concerned about.
I'd actually be surprised if sick days really were spread out evenly.
David
Isn't Monday and Friday 40% of the work week? Tuesday, Wednesday, and
Thursday being the other 60%? Sounds like an even distribution to me.
David Brown
2008-05-13 14:45:47 UTC
Permalink
Post by Ralph Shumaker
Post by David Brown
He also made a joke about the PHB being upset that 40% of sick days were on
Monday or Friday. He never explained why that was a weird thing to be
concerned about.
I'd actually be surprised if sick days really were spread out evenly.
Isn't Monday and Friday 40% of the work week? Tuesday, Wednesday, and
Thursday being the other 60%? Sounds like an even distribution to me.
That was supposed to be the joke. But sometimes Dilbert has jokes and
doesn't explain them. A lot of people don't get them. In the joke, the
PHB didn't get that M&F were 40%.

But, I would be surprised if people's sick time were actually evenly
distributed. I would expect to see a little bit more near weekends.

David

Brinkley Harrell
2008-05-03 23:52:59 UTC
Permalink
Post by Andrew Lentvorski
Post by Tracy R Reed
Post by Brinkley Harrell
The next logical extension from here would be to move to VmWare's
ESX server and run a minimal hypervisor architecture and maximize
the vm's footprint space.
For those who do not already know this is what xen is by default: a
hypervisor.
Now, I don't know what "minimal" means, but Xen struck me as being far
from qualifying as minimal.
-a
VmWare's newest ESX server version requires about 32MB of memory and can
actually be run out of firmware. It is a purely minimalist version of
system support providing only the barest interface to the hardware.
Check out the site at:

http://www.vmware.com/products/vi/esx/esx3i.html

for more information.
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Brinkley Harrell
http://www.fusemeister.com
Paul G. Allen
2008-05-03 10:36:07 UTC
Permalink
Post by Brinkley Harrell
The next logical extension from here would be to move to VmWare's ESX
server and run a minimal hypervisor architecture and maximize the vm's
footprint space.
The data center where Greenest Hosts servers are located uses ESX. It's
a managed center so I have limited access to VMWare and no access to the
actual hardware (and now that I'm working in "maintenance mode" and
"we'll call you when something breaks", I'm hardly accessing anything at
all). Until ESX migrates one of our servers.

Then all hell breaks loose. Such a migration happened about two weeks
ago. Suddenly, the web server could not reliably connect to the database
server. After three days of fscking with it, we moved the important web
sites and databases to another data center. Those web sites and the DB
server work perfectly fine.

Another problem is that whenever a server is migrated or rebooted, the
NIC configurations get hosed. We end up with duplicate IPs and aliases
that either don't belong or are incorrect. The config files are present
and correct, but the NICs are always mis-configured and I have to reset
them.

Things seem to work fine again for a while after such a migration, and
then a server gets migrated again, and the cycle repeats. Another admin
friend of mine said he's come across the same problem in the past with ESX.

So, either the admins at the data center are clueless (along with the
VMWare experts they pay to help support them) and are configuring
something wrong in VMWare, or VMWare ESX has serious problems.

PGA
--
Paul G. Allen, BSIT/SE
Owner, Sr. Engineer
Random Logic Consulting Services
www.randomlogic.com
Brinkley Harrell
2008-05-03 23:56:53 UTC
Permalink
Post by Paul G. Allen
Post by Brinkley Harrell
The next logical extension from here would be to move to VmWare's ESX
server and run a minimal hypervisor architecture and maximize the
vm's footprint space.
The data center where Greenest Hosts servers are located uses ESX.
It's a managed center so I have limited access to VMWare and no access
to the actual hardware (and now that I'm working in "maintenance mode"
and "we'll call you when something breaks", I'm hardly accessing
anything at all. Until ESX migrates one of our servers.
Then all hell breaks loose. Such a migration happened about two weeks
ago. Suddenly, the web server could not reliably connect to the
database server. After three days of fscking with it, we moved the
important web sites and databases to another data center. Those web
sites and the DB server work perfectly fine.
Another problem is that whenever a server is migrated or rebooted, the
NIC configurations get hosed. We end up with duplicate IPs and aliases
that either don't belong or are incorrect. The config files are
present and correct, but the NICs are always mis-configured and I have
to reset them.
Things seem to work fine again for a while after such a migration, and
then a server gets migrated again, and the cycle repeats. Another
admin friend of mine said he's come across the same problem in the
past with ESX.
So, either the admins at the data center are clueless (along with the
VMWare experts they pay to help support them) and are configuring
something wrong in VMWare, or VMWare ESX has serious problems.
What you describe is totally atypical for my experiences in VmWare
(Workstation, Server, or ESX server).
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Brinkley Harrell
http://www.fusemeister.com
Paul G. Allen
2008-05-03 10:51:52 UTC
Permalink
Post by Mike Marion
Post by Gregory K. Ruiz-Ade
anyone have a link that explains, for stupid people like me, the
differences between paravirt and full virt on xen, including all the
limitations of the two?
- Paravirt - Now hw support required. Quest OSes must know they're
running inside paravirt host (i.e. linux with xen kernel). Has speed
advantages over full because it's basically like an OS inside processes
and so the main/dom0 install is basically just switching processes. You
also cannot cross bitness (i.e. no 32bit guest OS on 64bit dom0 OS.. yet
anyway)
- Full virt - Requires HW support (i.e. AMD Pacifica or Intel VTx
extensions). Advantages are the guest OS doesn't even know it's being
virtualized. Disadvantage is more of a speed hit as the hw basically
has to context switch out all the info (registers and such) for each
switch between virtual hosts (vs the paravirt being done by OS).
If this is indeed the case, then the x86 architecture is not the best
model for the job. A better architecture would be a memory-to-memory
architecture such as the TI 99000 series of processors. These
processors were far faster when executing a context switch because all
registers (except for status, program counter, and workspace pointer)
were located in memory. During a context switch, only those three
registers had to be saved to memory, because all others were already there.

In addition, because of the memory-to-memory architecture, memory
manipulation (add, subtract, etc.) was faster because there was no need
to copy the memory content into a register before the calculation was
performed, and no need to copy the result from a register back to memory
afterward. From memory, this was the difference between the two at the
time:


TI add A to B:

Read A into Accum.
Read and add B to Accum.
Write result to memory.


x86 add A to B:

Read A into reg1.
Read B into reg2.
Add reg1 to reg2 with result in reg3.
Write result to memory.

The x86 has an additional step that the TI did not.

Throw in a stack and there's even more overhead (the TI had no stack,
unless the programmer wrote one, which I never found the need for).

PGA
--
Paul G. Allen, BSIT/SE
Owner, Sr. Engineer
Random Logic Consulting Services
www.randomlogic.com
Andrew Lentvorski
2008-05-04 20:34:19 UTC
Permalink
Post by Paul G. Allen
If this is indeed the case, then the x86 architecture is not the best
model for the job.
The x86 model is, in fact, the *worst* architecture for any job, let
alone virtualization.

The Alpha architecture was *excellent* for this. Alas, virtualization
didn't hit mainstream before the architecture died.

The POWER architecture is probably the best remaining architecture for
this kind of thing.

-a
Paul G. Allen
2008-05-05 17:05:59 UTC
Permalink
Post by Andrew Lentvorski
Post by Paul G. Allen
If this is indeed the case, then the x86 architecture is not the best
model for the job.
The x86 model is, in fact, the *worst* architecture for any job let
alone virtualization.
I wholeheartedly agree. I've never really liked the x86.
Post by Andrew Lentvorski
The Alpha architecture was *excellent* for this. Alas, virtualization
didn't hit mainstream before the architecture died.
I miss my days at DIGITAL. :(

I still have my Alpha, but it's so much slower than today's POS x86
systems. :(
Post by Andrew Lentvorski
The POWER architecture is probably the best remaining architecture for
this kind of thing.
As in Power Rangers? ;)

PGA
--
Paul G. Allen, BSIT/SE
Owner, Sr. Engineer
Random Logic Consulting Services
www.randomlogic.com
Andrew Lentvorski
2008-05-05 18:47:17 UTC
Permalink
Post by Paul G. Allen
Post by Andrew Lentvorski
The POWER architecture is probably the best remaining architecture for
this kind of thing.
As in Power Rangers? ;)
HENSHIN! :)

For those who don't know, POWER is the codename for the IBM architecture
which includes PowerPC at the low end to significantly beefier chips at
the high end.

-a
David Brown
2008-05-05 23:24:07 UTC
Permalink
Post by Andrew Lentvorski
Post by Paul G. Allen
Post by Andrew Lentvorski
The POWER architecture is probably the best remaining architecture for
this kind of thing.
As in Power Rangers? ;)
HENSHIN! :)
For those who don't know, POWER is the codename for the IBM architecture
which includes PowerPC at the low end to significantly beefier chips at the
high end.
The Cell processor in the PS3 is POWER based. It's not a bad system, but
it is slower than an equivalently clocked modern x86 system. I suspect that
has more to do with cache sizes, and the lack of memory on the PS3. They
probably also keep the design smaller since they are expecting most
applications to be using the other cells for a lot of the grunt work.

David
Andrew Lentvorski
2008-05-07 02:27:15 UTC
Permalink
Post by David Brown
The Cell processor in the PS3 is power based. It's not a bad system, but
it is slower than an equivalent clocked modern x86 system. I suspect that
has to do more with cache sizes, and the lack of memory on the PS3.
Actually, it has to do with the lack of much of the predictive,
superscalar, out-of-order cruft that sits in most modern processors.

Cell is optimized for chewing vertex matrices, not office workloads.

-a
Andrew Lentvorski
2008-04-30 22:22:12 UTC
Permalink
Post by Lan Barnes
Post by Mark Schoonover
On Wed, Apr 30, 2008 at 3:27 PM, Michael J McCafferty
Post by Michael J McCafferty
Nevermind the cheap space... tell me about the power you saved !
Power, and cooling.
I'm confused (in Linux) on how this saves, since I would expect that
throughput, memory, and context switching on VM boxen would _at best_ only
equal running all those services on one box. What am I missing?
Doesn't it add up to the same number of instructions per time unit, the
same memory load, the same disk space (except VM should need marginally
more for context switching)?
Pretty much. However, power dissipation in a computer system is mostly
going to non-work activities. Losses in power supplies, simply keeping
the microprocessor clock grid running, keeping the disks spun up, etc.

The differential between work and non-work is often not that high.
Consequently, making the system work harder moves the waste/useful ratio
in the correct direction.

-a
Lan Barnes
2008-05-01 00:09:54 UTC
Permalink
Post by Andrew Lentvorski
Post by Lan Barnes
Post by Mark Schoonover
On Wed, Apr 30, 2008 at 3:27 PM, Michael J McCafferty
Post by Michael J McCafferty
Nevermind the cheap space... tell me about the power you saved !
Power, and cooling.
I'm confused (in Linux) on how this saves, since I would expect that
throughput, memory, and context switching on VM boxen would _at best_ only
equal running all those services on one box. What am I missing?
Doesn't it add up to the same number of instructions per time unit, the
same memory load, the same disk space (except VM should need marginally
more for context switching)?
Pretty much. However, power dissipation in a computer system is mostly
going to non-work activities. Losses in power supplies, simply keeping
the microprocessor clock grid running, keeping the disks spun up, etc.
The differential between work and non-work is often not that high.
Consequently, making the system work harder moves the waste/useful ratio
in the correct direction.
-a
This is nonsense to me in my just-running-multiple-services scenario. Why
is flogging the box more a good thing if it's already running all the same
services that it would under vm?
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Andrew Lentvorski
2008-05-01 01:16:00 UTC
Permalink
Post by Lan Barnes
Post by Andrew Lentvorski
The differential between work and non-work is often not that high.
Consequently, making the system work harder moves the waste/useful ratio
in the correct direction.
-a
This is nonsense to me in my just-running-multiple-services scenario. Why
is flogging the box more a good thing if it's already running all the same
services that it would under vm?
Ah, you're assuming that all the services are on one box. In that case,
you're right. No power savings and the VM stuff will burn slightly more
power.

However, a lot of the time, this kind of server consolidation is
multi-box. If I have one box running DeadRat 3 kernel 2.4.2e.1897 for
application Foo and I have a separate box running DeadRat 4 kernel
2.4.20f.256 for application Bar and one last box running all of my
normal stuff, then I'm wasting a lot of power over a single machine
running normal stuff with 2 VM instances to handle the specific
applications Foo and Bar.

-a
Lan Barnes
2008-05-01 15:27:43 UTC
Permalink
Post by Andrew Lentvorski
Post by Lan Barnes
Post by Andrew Lentvorski
The differential between work and non-work is often not that high.
Consequently, making the system work harder moves the waste/useful ratio
in the correct direction.
-a
This is nonsense to me in my just-running-multiple-services scenario. Why
is flogging the box more a good thing if it's already running all the same
services that it would under vm?
Ah, you're assuming that all the services are on one box. In that case,
you're right. No power savings and the VM stuff will burn slightly more
power.
However, a lot of the time, this kind of server consolidation is
multi-box. If I have one box running DeadRat 3 kernel 2.4.2e.1897 for
application Foo and I have a separate box running DeadRat 4 kernel
2.4.20f.256 for application Bar and one last box running all of my
normal stuff, then I'm wasting a lot of power over a single machine
running normal stuff with 2 VM instances to handle the specific
applications Foo and Bar.
-a
Then we're both right, my favorite answer.
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Doug LaRue
2008-04-30 20:46:31 UTC
Permalink
** Reply to message from Tracy R Reed <***@ultraviolet.org> on Wed, 30 Apr
2008 15:15:44 -0700
Post by Tracy R Reed
There's more to virtualization than just throwing up boxes.
Virtualization gets me higher availability and decouples my server from
any one piece of physical hardware. And, of course, the cost savings are
huge. I've got 64 servers in just 8u of rackspace thanks to
virtualization. Now I not only save tens of thousands on hardware but I
save around $800/mo from not having to rent a second rack.
I'm not sure what you're doing with the 64 servers but I think what Lan
was saying is that with a robust OS, applications/services/servers are
isolated from each other with memory management built into the OS.
You can run 10 instances of apache with multiple instances of mysql
without having to add another OS wrapper and virtualized hardware
around it.

But if you have 64 servers because you have 64 customers who are
running their software on your server, then yup, VMs make it much
easier. The hardware abstraction is an interesting thought and some
of the imaging tools and failover tools would seem to also make
life easier for server farm farmers.

Doug
Tracy R Reed
2008-04-30 21:43:27 UTC
Permalink
Post by Doug LaRue
You can run 10 instances of apache with multiple instances of mysql
without having to add another OS wrapper and virtualized hardware
around it.
Sure. But when it comes time to upgrade that machine or reinstall for
whatever reason such a complicated setup is a huge pain. I am fighting
just that sort of environment moving out of our old datacenter. Imagine
you have a server compromise and have to shut it down and reinstall. The
downtime will be lengthy as you try to restore/reconfigure all of that
functionality.
Post by Doug LaRue
easier. The hardware abstraction is an interesting thought and some
of the imaging tools and failover tools would seem to also make
life easier for server farm farmers.
Failover and availability are needed by anyone who runs applications
which must be reliable. We don't intend to be server farm farmers. We
just end up with lots of mail/dns/db/web/file/etc servers.
Doug LaRue
2008-04-30 22:40:52 UTC
Permalink
** Reply to message from Tracy R Reed <***@ultraviolet.org> on Wed, 30 Apr
2008 16:43:24 -0700
Post by Tracy R Reed
Sure. But when it comes time to upgrade that machine or reinstall for
whatever reason such a complicated setup is a huge pain. I am fighting
just that sort of environment moving out of our old datacenter. Imagine
you have a server compromise and have to shut it down and reinstall. The
downtime will be lengthy as you try to restore/reconfigure all of that
functionality.
Yup, back in the day, we got UNIX updates maybe once a year with a patch
here and there every blue moon. Linux is moving really fast now and so is
all the software on top. I see your point: things are moving so fast that
keeping each service's underpinnings separate makes it easier to deal with.
Makes sense.

Doug
Lan Barnes
2008-04-30 19:01:13 UTC
Permalink
Post by Tracy R Reed
Post by Lan Barnes
This is one of those recurring stories, like "Popular Mechanics's" flying
car. Still, with the exception of #7 (Xen --because it has been recently
debunked here), I think she nailed it
Huh? Xen works as advertised and it is great. It is saving my company
tons of money and gives us capabilities we did not have before which
helps us react quickly to our clients needs. I wouldn't say it has been
"debunked" at all.
OK, all you xen bashers from last week -- have at it!
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Joshua Penix
2008-04-30 19:23:25 UTC
Permalink
Post by Lan Barnes
OK, all you xen bashers from last week -- have at it!
Xen has its place -- virtualizing many instances of a supported OS whose kernel can be modified to allow paravirtualization... a.k.a. Linux. It works really well in that case.

Where Xen falls down is in virtualizing other OSes. Windows under Xen works, but the performance is crap unless closed-source paravirtual drivers for network and disk are installed. FreeBSD is apparently not very well tested under Xen, or at least is having trouble keeping up with Xen's rapid development cycle. The BSD problem was the source of the original Xen rant by Andy.

Xen and the entire market of virtualization software is "suffering" from exceptionally rapid growth. This stuff isn't mature yet, and there's lots of work to be done. Most effort is currently being spent on optimizing the software and adding features for the most common implementation situations. Therefore VMware's management tools lean toward being heavily Windows-centric, since its largest installed base is in corporate environments where Windows-based management tools are the norm. Xen is best tested with Linux for both host and guest... the fact that it's open source lets others like Solaris and BSD get in on the fun, but it's up to their developers to keep up with Xen.

As time goes on and the feature sets of all these systems stabilize, then the APIs and documentation will too. Subsequently more development effort will be able to focus on what are currently "edge cases," and virtualization will move toward being an expected feature of every OS and at some point probably fade into the background as something that "just works."
--
Joshua Penix http://www.binarytribe.com
Binary Tribe Linux Integration Services & Network Consulting
Lan Barnes
2008-04-30 19:30:28 UTC
Permalink
Post by Joshua Penix
Post by Lan Barnes
OK, all you xen bashers from last week -- have at it!
Xen has its place -- virtualizing many instances of a supported OS whose
kernel can be modified to allow paravirtualization... a.k.a. Linux. It
works really well in that case.
Where Xen falls down is in virtualizing other OSes. Windows under Xen
works, but the performance is crap unless closed-source paravirtual
drivers for network and disk are installed. FreeBSD is apparently not
very well tested under Xen, or at least is having trouble keeping up with
Xen's rapid development cycle. The BSD problem was the source of the
original Xen rant by Andy.
Xen and the entire market of virtualization software is "suffering" from
exceptionally rapid growth. This stuff isn't mature yet, and there's lots
of work to be done. Most effort is currently being spent on optimizing
the software and adding features for the most common implementation
situations. Therefore VMware's management tools lean toward being heavily
Windows-centric, since its largest installed base is in corporate
environments where Windows-based management tools are the norm. Xen is
best tested with Linux for both host and guest... the fact that it's open
source lets others like Solaris and BSD get in on the fun, but it's up to
their developers to keep up with Xen.
As time goes on and the feature sets of all these systems stabilize, then
the APIs and documentation will too. Subsequently more development effort
will be able to focus on what are currently "edge cases," and
virtualization will move toward being an expected feature of every OS and
at some point probably fade into the background as something that "just
works."
No better way to ruin a conversation than for people who know what they're
talking about to chime in.
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
John Oliver
2008-04-30 19:46:49 UTC
Permalink
Post by Lan Barnes
Post by Joshua Penix
Post by Lan Barnes
OK, all you xen bashers from last week -- have at it!
Xen has its place -- virtualizing many instances of a supported OS whose
kernel can be modified to allow paravirtualization... a.k.a. Linux. It
works really well in that case.
Where Xen falls down is in virtualizing other OSes. Windows under Xen
works, but the performance is crap unless closed-source paravirtual
drivers for network and disk are installed. FreeBSD is apparently not
very well tested under Xen, or at least is having trouble keeping up with
Xen's rapid development cycle. The BSD problem was the source of the
original Xen rant by Andy.
Xen and the entire market of virtualization software is "suffering" from
exceptionally rapid growth. This stuff isn't mature yet, and there's lots
of work to be done. Most effort is currently being spent on optimizing
the software and adding features for the most common implementation
situations. Therefore VMware's management tools lean toward being heavily
Windows-centric, since its largest installed base is in corporate
environments where Windows-based management tools are the norm. Xen is
best tested with Linux for both host and guest... the fact that it's open
source lets others like Solaris and BSD get in on the fun, but it's up to
their developers to keep up with Xen.
As time goes on and the feature sets of all these systems stabilize, then
the APIs and documentation will too. Subsequently more development effort
will be able to focus on what are currently "edge cases," and
virtualization will move toward being an expected feature of every OS and
at some point probably fade into the background as something that "just
works."
No better way to ruin a conversation than for people who know what they're
talking about to chime in.
He's a witch! Burn him!
--
***********************************************************************
* John Oliver http://www.john-oliver.net/ *
* *
***********************************************************************
Doug LaRue
2008-04-30 20:38:12 UTC
Permalink
** Reply to message from Joshua Penix <***@binarytribe.com> on Wed, 30 Apr
2008 14:23:18 -0700 (PDT)
Post by Joshua Penix
As time goes on and the feature sets of all these systems stabilize, then the
APIs and documentation will too. Subsequently more development effort will be
able to focus on what are currently "edge cases," and virtualization will move
toward being an expected feature of every OS and at some point probably fade
into the background as something that "just works."
Virtualization is really a boon to Windows shops and will probably find its
way into the Windows OS (desktop and server) as a standard part. It is really
not needed so much in the *nix world except for vendors, ISPs, etc. who like
to keep their customers' server processes quite separate from others. But
when the OS has good memory management and a robust security system, as Lan
mentioned, you can run more than one service on the OS. As for Windows, well,
this whole VM craze started when IBM started kicking butt by wrapping Windows
physical servers into one mainframe running dozens and dozens of virtual
machines. There was actually a big savings in consolidating all those x86
boxes into one large one with a ton of uptime and I/O protection
(RAID/hotplug/etc.).

FYI, from what I heard, virtualization on the PC platform started when a Russian
company was hired to provide a way to run OS/2 apps with Windows apps. I
believe it later became Parallels.

But you know what, virtual machines are also a cool way to install Linux
applications on Windows. If the application is browser-based and has all the
management features in the browser, it's easier to install a VM and be
isolated from all the issues of putting pieces on/in Windows. For instance,
if a Windows shop wants a bug tracking system, a Trac-based VM can do the
trick and they won't really know or care it is Linux under the hood.

Doug
Lan Barnes
2008-04-30 21:15:04 UTC
Permalink
On Wed, April 30, 2008 3:38 pm, Doug LaRue wrote:

-snip of utility of VM-

Also consider CM and QA. Over the years, you may have 3 or more separate
build environments (OS, compiler ver, ancillary tools and libs) per
development cycle. You will want these to be immediately available, and
for years to come, for reliable rebuildability.

In windoze OR Linux, VM is a godsend to SCM. Before VM we documented it
and prayed no one ever asked us to do a rebuild in later years.
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Andrew Lentvorski
2008-04-30 21:39:12 UTC
Permalink
Post by Lan Barnes
-snip of utility of VM-
Also consider CM and QA. Over the years, you may have 3 or more separate
build environments (OS, compiler ver, ancillary tools and libs) per
development cycle. You will want these to be immediately available, and
for years to come, for reliable rebuildability.
Bingo. That's where I'm coming from.

If all I wanted was a mail server, I'd just install FreeBSD and Postfix,
go home, and forget about the box for the next 4 years.

The problem I have is that I have too much software development now that
*DEMANDS* FooLinux 7.9 with Autoblarg 9.7.14 and ...

Apparently the monocultures lesson didn't take ...

The easiest way around this is simply to be able to open up a VM and
install what it needs and compile it. Then you can either strip out the
dependency or grab what you need and destroy the VM.

VMWare is quite good for this--or was until the Windows-only excursion.

Xen, not so much.

-a
Todd Walton
2008-05-01 20:10:35 UTC
Permalink
Post by Andrew Lentvorski
Apparently the monocultures lesson didn't take ...
Not everyone was given that lesson. I think it's unreasonable to
assume that everyone is going to view computing in the same way you
are. Not everyone is as educated as you are, for instance. There's
some extent to which the stratum has to take responsibility for the
things built upon it.

-todd
Andrew Lentvorski
2008-05-01 21:48:10 UTC
Permalink
Post by Todd Walton
Post by Andrew Lentvorski
Apparently the monocultures lesson didn't take ...
Not everyone was given that lesson.
Um, actually I would expect that anyone doing development on Linux did
get that lesson on a daily basis (Windoze, anyone?).

The fact that the Linux development community is now happy to assume
things like "All the Linux are RedHat" simply demonstrates that
programmer crappiness is not confined to Windows.

-a
Gregory K. Ruiz-Ade
2008-05-02 15:07:27 UTC
Permalink
Post by Andrew Lentvorski
The fact that the Linux development community is now happy to
assume things like "All the Linux are RedHat" simply demonstrates
that programmer crappiness is not confined to Windows.
I think it's more a matter that we've reached critical mass in the FL/OSS
[... your teeth?] world, such that we're now attracting the bad-lazy
programmers (as opposed to the good-lazy programmers).

Gregory
--
Gregory K. Ruiz-Ade <***@unnerving.org>
OpenPGP Key ID: EAF4844B keyserver: pgpkeys.mit.edu
Lan Barnes
2008-05-02 15:26:58 UTC
Permalink
Post by Gregory K. Ruiz-Ade
Post by Andrew Lentvorski
The fact that the Linux development community is now happy to
assume things like "All the Linux are RedHat" simply demonstrates
that programmer crappiness is not confined to Windows.
I think it's more a matter that we've reached critical mass in the FL/
OSS [... your teeth?] world, such that we're now attracting the bad-
lazy programmers (as opposed to the good-lazy programmers).
Gregory
There have been bad programmers in Linux since RH 4.1 (which is when I
arrived).
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Gregory K. Ruiz-Ade
2008-05-02 15:59:49 UTC
Permalink
Post by Lan Barnes
There have been bad programmers in Linux since RH 4.1 (which is when I
arrived).
But we like _you_. :D

Gregory
--
Gregory K. Ruiz-Ade <***@unnerving.org>
OpenPGP Key ID: EAF4844B keyserver: pgpkeys.mit.edu
Lan Barnes
2008-05-02 16:02:37 UTC
Permalink
Post by Gregory K. Ruiz-Ade
Post by Lan Barnes
There have been bad programmers in Linux since RH 4.1 (which is when I
arrived).
But we like _you_. :D
Gregory
Fool! Don't you know I archive emails like this?
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Todd Walton
2008-05-02 19:21:14 UTC
Permalink
On Fri, May 2, 2008 at 12:00 PM, Gregory K. Ruiz-Ade
I think it's more a matter that we've reached critical mass in the FL/OSS
[... your teeth?] world, such that we're now attracting the bad-lazy
programmers (as opposed to the good-lazy programmers).
Sort of a larger-target-attracts-malware-attacks argument.

-todd
Mike Marion
2008-05-02 16:17:52 UTC
Permalink
Post by Andrew Lentvorski
The fact that the Linux development community is now happy to assume
things like "All the Linux are RedHat" simply demonstrates that
programmer crappiness is not confined to Windows.
I've actually had reps from some of the big EDA vendors we work with
make references like "suse linux" vs "regular linux." Guess what they
considered "regular?"

Annoys the crap out of me. In the same thread, they were claiming
they had compiled in some feature in the sw in question in the
sparc/solaris and "regular" linux build, but not in the suse one.. for
some dumbass reason.

Don't get me started on their incredibly stupid ass code that can be
broken by future kernel changes to values in /proc/meminfo, doing
things like causing variable overflow that makes a program malloc()
anywhere from 400meg to 980gig! at startup. This was due to a feature
they deliberately did to attempt to malloc all the ram the app thought
it might need at startup... because "it's faster that way." Well duh..
if you don't ever have to deal with memory allocation while you go,
then yeah, you might get a small boost.. but think what would happen
if everyone did that. Firefox: Malloc a gig or so at startup.. just
in case. They actually claimed that the speedup was because it
"helped to guarantee the memory would be contiguous, thus making it
faster," which made me want to beat the snot out of the morons that
clearly have never learned anything about how memory management in
modern unix(-like) OSes works.
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
Groundskeeper Willie: "What? Have you gone waxie in your pister? I cannot fit
in the wee vent, you croquet playing mint muncher."
Principal Skinner: "Greese yourself up yourself and go in you... you, guff
speaking work slacker!"
Willie: "Oooh. Good comeback." ==> Simpsons.
Andrew Lentvorski
2008-05-02 18:54:36 UTC
Permalink
Post by Mike Marion
Don't get me started on their incredibly stupid ass code that can be
broken by future kernel changes to values in /proc/meminfo, doing things
like causing variable overflow that makes a program malloc() anywhere
from 400meg to 980gig! at startup. This was due to a feature they
deliberately did to attempt to malloc all the ram the app thought it
might need at startup... because "it's faster that way."
As much as I hate EDA companies, I'm going to defend them on this even
though they got the explanation (sorta) wrong.

The issue is not that it is faster for the application.

The issue is that the Linux kernel *lies*.

If you ask the Linux kernel for memory, it immediately returns "Here ya
go, boss." *but doesn't allocate the !@#$ memory* until you touch it.
Because, well, that's faster.

So, you allocate a nice set of pages you think is plenty big enough to
hold your very long, very complicated computation that's going to use
all that memory and halfway through *BOOM*--out of memory. So, your
choices are to touch *every single byte* up front in order to make Linux
actually give you the blasted memory (stupid when I'm actually going to
touch the memory when I use it anyhow), or bash some undocumented things
to make sure that the kernel can and will give you the memory when you
get there.

This kind of stuff is the reason I avoid Linux when I want reliability.
Yeah, it only bites you at the edges, but there are *lots* of edges.
And the Linux community takes its cue from Linus and ignores things that
are 1% fails if they get in the way of "performance" (see async writes
to disk as default for another example).
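
For the curious, here is a minimal sketch of the behavior being described
(file name and sizes are made up; whether the malloc() itself succeeds, and
whether the touch loop survives, depends on RAM, swap, and the
/proc/sys/vm/overcommit_memory setting discussed further down the thread):

/* overcommit_demo.c -- sketch only, not a benchmark.
 * Build: gcc -O2 overcommit_demo.c -o overcommit_demo
 * Run:   ./overcommit_demo <gigabytes>
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    size_t gigs = (argc > 1) ? strtoul(argv[1], NULL, 10) : 4;
    size_t sz = gigs << 30;

    /* Under the default heuristic the kernel usually says "here ya go"
     * and hands back address space, not committed pages. */
    char *p = malloc(sz);
    if (!p) {
        perror("malloc");   /* with overcommit disabled you can land here */
        return 1;
    }
    printf("malloc() of %zu GiB returned non-NULL\n", gigs);

    /* Real memory is only claimed as the pages are touched; on a box
     * that can't back them, this loop is where the OOM killer shows up. */
    for (size_t i = 0; i < sz; i += 4096)
        p[i] = 1;

    printf("touched every page and survived\n");
    free(p);
    return 0;
}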

-a
Doug LaRue
2008-05-02 19:13:08 UTC
Permalink
** Reply to message from Andrew Lentvorski <***@allcaps.org> on Fri, 02 May
2008 13:53:12 -0700
Post by Andrew Lentvorski
This kind of stuff is the reason I avoid Linux when I want reliability.
so what do you use? I was searching for what you were mentioning and
on a Solaris list I see someone asking about an easier way than touching
all of malloc'ed memory because it's not allocated on malloc. So it seems
Solaris does/did this as well as Linux.

BTW, one mention was to "touch" just on block in the memory saying that
was enough and another mentioned using mlock because that forces allocation.
Post by Andrew Lentvorski
And the Linux community takes its cue from Linus and ignores things that
are 1% fails if they get in the way of "performance" (see async writes
to disk as default for another example).
Are these kinds of things kept out of the kernel altogether or available via
compile options? I know you can't please everyone all the time, but there
are some things where a compile option keeps those special-case people
happy.

Doug
Gus Wirth
2008-05-02 20:09:41 UTC
Permalink
Post by Andrew Lentvorski
Post by Mike Marion
Don't get me started on their incredibly stupid ass code that can be
broken by future kernel changes to values in /proc/meminfo, doing
things like causing variable overflow that makes a program malloc()
anywhere from 400meg to 980gig! at startup. This was due to a feature
they deliberately did to attempt to malloc all the ram the app thought
it might need at startup... because "it's faster that way."
As much as I hate EDA companies, I'm going to defend them on this even
though they got the explanation (sorta) wrong.
The issue is not that it is faster for the application.
The issue is that the Linux kernel *lies*.
If you ask the Linux kernel for memory, it immediately returns "Here ya
Because, well, that's faster.
So, you allocate a nice set of pages you think is plenty big enough to
hold your very long, very complicated computation that's going to use
all that memory and halfway through *BOOM*--out of memory. So, your
choices are to touch *every single byte* up front in order to make Linux
actually give you the blasted memory (stupid when I'm actually going to
touch the memory when I use it anyhow), or bash some undocumented things
to make sure that the kernel can and will give you the memory when you
get there.
This kind of stuff is the reason I avoid Linux when I want reliability.
Yeah, it only bites you at the edges, but there are *lots* of edges.
And the Linux community takes its cue from Linus and ignores things that
are 1% fails if they get in the way of "performance" (see async writes
to disk as default for another example).
You seem to be unaware that you can turn the overcommit behavior off.
The memory allocation behavior is controlled through the
/proc/sys/vm/overcommit_* control, where * is either memory or ratio.

For the memory overcommit, from
<http://www.linuxinsight.com/proc_sys_vm_overcommit_memory.html>

overcommit_memory
Submitted by admin on Wed, 2006-05-31 16:52.

Controls overcommit of system memory, possibly allowing processes to
allocate (but not use) more memory than is actually available.

* 0 - Heuristic overcommit handling. Obvious overcommits of address
space are refused. Used for a typical system. It ensures a seriously
wild allocation fails while allowing overcommit to reduce swap usage.
root is allowed to allocate slightly more memory in this mode. This is
the default.
* 1 - Always overcommit. Appropriate for some scientific applications.
* 2 - Don't overcommit. The total address space commit for the
system is not permitted to exceed swap plus a configurable percentage
(default is 50) of physical RAM. Depending on the percentage you use, in
most situations this means a process will not be killed while attempting
to use already-allocated memory but will receive errors on memory
allocation as appropriate.

So to turn off the overcommit behavior you would do:

# echo "2" > /proc/sys/vm/overcommit_memory

Also for the overcommit ratio
<http://www.linuxinsight.com/proc_sys_vm_overcommit_ratio.html>

overcommit_ratio
Submitted by admin on Wed, 2006-05-31 16:55.

Percentage of physical memory size to include in overcommit calculations.

Memory allocation limit = swapspace + physmem * (overcommit_ratio / 100)

swapspace = total size of all swap areas
physmem = size of physical memory in system
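
As a quick illustration of the practical difference (file name made up, and
the numbers will vary from box to box): with overcommit disabled, malloc()
itself fails near the commit limit and the program can handle it, instead of
sailing far past physical RAM and getting killed later.

/* commit_probe.c -- keep asking for 256 MB chunks until malloc() says no.
 * With overcommit_memory=2 this stops near swap + RAM*ratio; with the
 * default heuristic it can run far past what the machine could ever back.
 * Build: gcc -O2 commit_probe.c -o commit_probe */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 256UL * 1024 * 1024;
    size_t total = 0;

    /* The chunks are never touched, so no physical pages are consumed;
     * only the kernel's commit accounting (if any) is charged. */
    while (malloc(chunk) != NULL) {
        total += chunk;
        if (total >= ((size_t)1 << 40))   /* give up after 1 TiB of "yes" */
            break;
    }
    printf("malloc() kept succeeding up to %zu MiB\n", total >> 20);
    return 0;
}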

Gus
Mike Marion
2008-05-02 21:13:58 UTC
Permalink
Post by Gus Wirth
You seem to be unaware that you can turn the overcommit behavior off.
The memory allocation behavior is controlled through the
/proc/sys/vm/overcommit_* control, where * is either memory or ratio.
We've looked into this ourselves too, but have avoided doing much with
it. We would run into far too many applications dying when they don't
really need to, because it's not at all uncommon to see some of these
apps have a VM size that's massively larger than the resident size
(and often the VM is more than the RAM in the system).

Some samples I just snagged...

RES VM
7.53G 119.09G
60.53G 109.26G
4.35G 87.91G
4.00G 83.04G

Crazy...
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
Do not meddle in the affairs of sysadmins, for they are easy to annoy and have
the root password.
Mike Marion
2008-05-02 21:04:43 UTC
Permalink
Post by Andrew Lentvorski
So, you allocate a nice set of pages you think is plenty big enough to
hold your very long, very complicated computation that's going to use
all that memory and halfway through *BOOM*--out of memory. So, your
choices are to touch *every single byte* up front in order to make
Linux actually give you the blasted memory (stupid when I'm actually
going to touch the memory when I use it anyhow), or bash some
undocumented things to make sure that the kernel can and will give you
the memory when you get there.
That's not what (or why) they're doing though. If it were, they
wouldn't ask for the massive amounts they do without actually
using/testing any. They don't use it.. they just malloc it, in which
case, because of the default linux behavior to overcommit, they won't
know if they'll get 1/2 way and crash anyway when the app actually
goes to use it. The specifically designed this "feature" for
speed... it has nothing to do with reliability.
--
Mike Marion-Unix/Linux Admin-http://www.miguelito.org
Marge: Homer, the plant called. They said if you don't show up tomorrow don't
bother showing up on Monday.
Homer: Woo-hoo! Four-day weekend!
==> Simpsons
Paul G. Allen
2008-05-03 11:06:12 UTC
Permalink
Post by Andrew Lentvorski
Post by Mike Marion
Don't get me started on their incredibly stupid ass code that can be
broken by future kernel changes to values in /proc/meminfo, doing
things like causing variable overflow that makes a program malloc()
anywhere from 400meg to 980gig! at startup. This was due to a feature
they deliberately did to attempt to malloc all the ram the app thought
it might need at startup... because "it's faster that way."
As much as I hate EDA companies, I'm going to defend them on this even
though they got the explanation (sorta) wrong.
The issue is not that it is faster for the application.
The issue is that the Linux kernel *lies*.
If you ask the Linux kernel for memory, it immediately returns "Here ya
Because, well, that's faster.
So, you allocate a nice set of pages you think is plenty big enough to
hold your very long, very complicated computation that's going to use
all that memory and halfway through *BOOM*--out of memory. So, your
choices are to touch *every single byte* up front in order to make Linux
actually give you the blasted memory (stupid when I'm actually going to
touch the memory when I use it anyhow), or bash some undocumented things
to make sure that the kernel can and will give you the memory when you
get there.
Good programming practice (and good security practice) dictates that
when a variable is instantiated, it is initialized to some value. This
includes large blocks of memory that might hold some big blob (to use a
DB term).

So, the way Linux does it is not bad at all, but the way programmers
fail to initialize the memory as soon as it's allocated *is* bad.
Allocate the memory and initialize it when it's allocated, not later on
when you *might* use it. That way, it's there up front, before the long
computation, and there are no surprises halfway through. (This is why the
C or C++ compiler warns about uninitialized objects.)
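
A minimal sketch of that practice (the function name is invented): grab the
whole buffer and force it into a known state up front, so any memory problem
shows up before the expensive computation starts rather than halfway through.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Allocate and initialize the working buffer before the long computation.
 * The memset() both puts the memory in a known state and forces the kernel
 * to actually back every page, so an out-of-memory condition surfaces here
 * rather than in the middle of the run. */
static double *make_workspace(size_t n)
{
    double *buf = malloc(n * sizeof *buf);
    if (!buf)
        return NULL;                    /* caller can bail out cleanly */
    memset(buf, 0, n * sizeof *buf);    /* known state + pages committed */
    return buf;
}

int main(void)
{
    size_t n = 100 * 1000 * 1000;       /* roughly 800 MB of doubles */
    double *w = make_workspace(n);
    if (!w) {
        fprintf(stderr, "not enough memory, giving up early\n");
        return 1;
    }
    /* ... long computation using w goes here ... */
    free(w);
    return 0;
}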

PGA
--
Paul G. Allen, BSIT/SE
Owner, Sr. Engineer
Random Logic Consulting Services
www.randomlogic.com
Lan Barnes
2008-05-03 12:57:02 UTC
Permalink
Post by Paul G. Allen
Good programming practice (and good security practice) dictates that
when a variable is instantiated, it is initialized to some value. This
includes large blocks of memory that might hold some big blob (to use a
DB term).
So, the way Linux does it is not bad at all, but the way programmers
fail to initialize the memory as soon as it's allocated *is* bad.
Allocate the memory and initialize it when it's allocated, not later on
when you *might* use it. That way, it's there up front, before the long
computation, and there's no surprises half way through. (This is why the
C or C++ compiler warns about uninitialized objects.)
Heh ... just flashing back to long long ago when a woman on staff at a
place I worked (very briefly) allocated/made sure she had enough disk
space by writing nulls to a file, one at a time, in a loop. It sure was
initialized! They asked the rest of us if we could punch up the
performance somehow.

Not much we could do. She had a case of ego and was the owner's wife.
--
Lan Barnes

SCM Analyst Linux Guy
Tcl/Tk Enthusiast Biodiesel Brewer
Doug LaRue
2008-05-03 14:25:47 UTC
Permalink
** Reply to message from "Lan Barnes" <***@falleagle.net> on Sat, 3 May 2008
07:56:58 -0700 (PDT)
Post by Lan Barnes
made sure she had enough disk
space by writing nulls to a file, one at a time, in a loop
At least she used a loop. I saw code a guy did at SPAWAR where he
initialized a data structure with hundreds of lines of copied code.
When asked about it, he said it was easier to read the code that way.

I also saw Java get a bad rep when a developer built a help system in
Java with HTML files but because he terminated the JVM after every help
page, they blew the entire project off as "Java was slow" and recoded
it in C. They wouldn't do C++ because too many there didn't like OOP.
It's a fickle crowd to say the least.

Doug
David Brown
2008-05-03 15:42:46 UTC
Permalink
So, the way Linux does it is not bad at all, but the way programmers fail
to initialize the memory as soon as it's allocated *is* bad. Allocate the
memory and initialize it when it's allocated, not later on when you *might*
use it. That way, it's there up front, before the long computation, and
there's no surprises half way through. (This is why the C or C++ compiler
warns about uninitialized objects.)
Linux allocated memory is "initialized", at least it will always appear
that way as long as there is sufficient memory. Newly allocated memory
will always be initialized to zero.

The thing is, _all_ memory managers used on linux, including malloc,
request memory from the OS in much larger chunks than the user requests.
Syscalls are more expensive than manipulating a few pointers, so it's
better to reduce the number of syscalls.
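
A tiny sketch of how those two layers differ in practice (nothing here is
guaranteed by the C standard; treat it as an illustration): pages that come
straight from the kernel read as zero, while a malloc() that recycles a
chunk the allocator already holds may hand back old contents without any
extra syscall.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Memory obtained directly from the kernel is always zero-filled. */
    char *fresh = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (fresh == MAP_FAILED)
        return 1;
    printf("fresh kernel page starts with: %d\n", fresh[0]);  /* prints 0 */

    /* malloc() makes no such promise: a freed chunk can come back with its
     * old contents, because the allocator reuses memory it already got from
     * the kernel instead of asking for more. */
    char *a = malloc(64);
    strcpy(a, "old contents");
    free(a);

    char *b = malloc(64);     /* may or may not hand back a's old chunk */
    printf("recycled chunk begins: %.12s\n", b);
    free(b);
    munmap(fresh, 4096);
    return 0;
}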

Most lisp systems request enormous amounts of memory from the system, but
only make use of the memory that they need. It's kind of annoying because
the virtual memory used isn't meaningful. Clozure Common Lisp requests
512GB, and SBCL requests 8GB. But, configuring the OS to force these
allocations to get real memory would prevent either from running.

David
Paul G. Allen
2008-05-04 10:09:05 UTC
Permalink
Post by David Brown
Linux allocated memory is "initialized", at least it will always appear
that way as long as there is sufficient memory. Newly allocated memory
will always be initialized to zero.
It really doesn't matter what Linux or any other OS does, or is supposed
to do. It is bad programming practice (and usually a big mistake, and
always a security risk) to assume that allocated memory is initialized
to zero.

PGA
--
Paul G. Allen, BSIT/SE
Owner, Sr. Engineer
Random Logic Consulting Services
www.randomlogic.com
Doug LaRue
2008-05-04 13:03:22 UTC
Permalink
** Reply to message from "Paul G. Allen" <***@randomlogic.com> on Sun, 04
May 2008 05:06:56 -0700
Post by Paul G. Allen
It is bad programming practice (and usually a big mistake, and
always a security risk) to assume that allocated memory is initialized
to zero.
No need to assume: malloc does not initialize the memory to zero, and you
can call memset to force the allocation and do the initialization yourself.
Better yet, calloc reserves the memory and guarantees it reads as zero,
though the physical pages may not actually be allocated until they are
first used. The choices are there for the developer.

And if you don't trust that calloc does the initialization, check the source.
Same for memset.

Doug
Andrew Lentvorski
2008-05-04 22:57:23 UTC
Permalink
Post by David Brown
The thing is, _all_ memory managers used on linux, including malloc,
request memory from the OS in much larger chunks than the user requests.
Syscalls are more expensive than manipulating a few pointers, so it's
better to reduce the number of syscalls.
That is true. However, if memory allocation efficiency matters, you
request a big hunk once and do it yourself.

IIRC, Firefox is changing over to doing that along with garbage
collecting their own objects.
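
A sketch of the "grab a big hunk once and do it yourself" approach (all of
the names are invented): a dumb bump allocator that makes one malloc() call
up front and then hands out pieces with pointer arithmetic, with no further
trips to the system allocator.

#include <stdio.h>
#include <stdlib.h>

/* One big hunk from malloc(), then carve it up ourselves. */
typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} arena_t;

static int arena_init(arena_t *a, size_t cap)
{
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base ? 0 : -1;
}

static void *arena_alloc(arena_t *a, size_t n)
{
    n = (n + 15) & ~(size_t)15;         /* keep 16-byte alignment */
    if (a->used + n > a->cap)
        return NULL;                    /* arena exhausted */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

int main(void)
{
    arena_t a;
    if (arena_init(&a, 1 << 20) != 0)   /* one 1 MB hunk up front */
        return 1;

    int    *xs = arena_alloc(&a, 1000 * sizeof *xs);
    double *ys = arena_alloc(&a, 1000 * sizeof *ys);
    (void)xs; (void)ys;
    printf("handed out %zu of %zu bytes with a single malloc()\n",
           a.used, a.cap);

    free(a.base);                       /* and freed in one shot */
    return 0;
}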
Post by David Brown
Most lisp systems request enormous amounts of memory from the system, but
only make use of the memory that they need. It's kind of annoying because
the virtual memory used isn't meaningful. Clozure Common Lisp requests
512GB, and SBCL requests 8GB. But, configuring the OS to force these
allocations to get real memory would prevent either from running.
*blink* Okay, that's just *broken*. Way broken.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=474402
Post by David Brown
This is no new behaviour of SBCL, and rather a implementation limitation
than a bug. SBCL needs to reserve a fixed contiguous address space for
its GC-managed memory.
And known broken. Bad GC implementation.

-a
David Brown
2008-05-05 04:44:34 UTC
Permalink
Post by Andrew Lentvorski
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=474402
Post by David Brown
This is no new behaviour of SBCL, and rather a implementation limitation
than a bug. SBCL needs to reserve a fixed contiguous address space for
its GC-managed memory.
And known broken. Bad GC implementation.
Back when CMU originally wrote it under Mach, it wasn't a big deal, since
they really were just requesting address space. They try to fake it as
much as they can on other platforms, and other than bogus numbers in 'top'
it really does work.

The issue is that they need to be able to guarantee that all future
allocations in the garbage collector will be within the same contiguous
address space.
They can't have syscalls allocating in the middle of their space or it gets
a lot less efficient to figure out who owns what pointers.

Don't get me started on the !@#$ they have to do on x86 just to deal with
the lack of registers.

CCL decided that they're finally going to support x86, but only SSE3, so
that they can use the SSE registers. The code generator divides up the
registers and uses one set for managed pointers and the other set for
unmanaged pointers. It makes the tree traversal in the GC a lot simpler,
and faster.

David
Andrew Lentvorski
2008-05-04 22:18:43 UTC
Permalink
Post by Paul G. Allen
So, the way Linux does it is not bad at all, but the way programmers
fail to initialize the memory as soon as it's allocated *is* bad.
Allocate the memory and initialize it when it's allocated, not later on
when you *might* use it. That way, it's there up front, before the long
computation, and there's no surprises half way through. (This is why the
C or C++ compiler warns about uninitialized objects.)
You do embedded software? Right? And *still* make that statement?

Programmers like you are the reason that all of the Moore's law gains go
directly into the trash, programs start up like sludge, and "Please
wait..." screens are the norm.

It is *not* required to always initialize large hunks of memory, and doing
so may, in fact, change a very nice O(log n) computation which the computer
can happily chew through quickly into an O(n) computation that moves like
molasses.

In addition, when I am allocating a whopping chunk of memory, I'm
generally *about to index it and put something in it myself*. Like
computed data. Or bitvector tracked objects. Or ... And your silly
initialize loop just blew out all of my cache lines for any data I had
previously.

Finally, if you didn't always have 0-initialized memory, you'd actually
notice the buffer and sentinel under- and overruns from uninitialized
values and have to *fix* them rather than "Hey, runs on Linux. What's
your problem?".

Initialize memory when it makes sense.

Yeah, sometimes you need to initialize memory up front. Sometimes it
doesn't matter, so do it anyhow (the computation is O(n log n), so the
memory clear is irrelevant). However, the time when you need to
allocate huge chunks is normally *exactly* the time when you actually
need to *think* about that.

-a
Paul G. Allen
2008-05-05 17:37:37 UTC
Permalink
Post by Andrew Lentvorski
Post by Paul G. Allen
So, the way Linux does it is not bad at all, but the way programmers
fail to initialize the memory as soon as it's allocated *is* bad.
Allocate the memory and initialize it when it's allocated, not later
on when you *might* use it. That way, it's there up front, before the
long computation, and there's no surprises half way through. (This is
why the C or C++ compiler warns about uninitialized objects.)
You do embedded software? Right? And *still* make that statement?
Yes, and I can safely say that 100% of my embedded software has never
been released with a bug. Allocating memory to an object in ANY system
without properly initializing it to a known state is BAD BAD BAD.
Especially in a complex system.

I have fixed lots of embedded software written by others that failed to
do this, and in the process fixed bugs in said software. I've also fixed
PC software that failed to do this.
Post by Andrew Lentvorski
Programmers like you are the reason that all of the Moore's law gains go
directly into the trash, programs start up like sludge, and "Please
wait..." screens are the norm.
With a statement like that, it's pointless to discuss things further
with you.

PGA
--
Paul G. Allen, BSIT/SE
Owner, Sr. Engineer
Random Logic Consulting Services
www.randomlogic.com
Andrew Lentvorski
2008-05-05 18:45:36 UTC
Permalink
Post by Paul G. Allen
Post by Andrew Lentvorski
Post by Paul G. Allen
So, the way Linux does it is not bad at all, but the way programmers
fail to initialize the memory as soon as it's allocated *is* bad.
Allocate the memory and initialize it when it's allocated, not later
on when you *might* use it. That way, it's there up front, before the
long computation, and there's no surprises half way through. (This is
why the C or C++ compiler warns about uninitialized objects.)
You do embedded software? Right? And *still* make that statement?
Yes, and I can safely say that 100% of my embedded software has never
been released with a bug. Allocating memory to an object in ANY system
without properly initializing it to a known state is BAD BAD BAD.
Especially in a complex system.
I have fixed lots of embedded software written by others that failed to
do this, and in the process fixed bugs in said software. I've also fixed
PC software that failed to do this.
And I have had to do the exact opposite. I had to remove an
initialization loop on an embedded ARM7 system that wrote zeros to
memory that was being tracked by a bitvector map precisely because
writing to memory was so slow. Initializing the memory was completely
taken care of by initializing the bitvector, yet they wasted huge chunks
of time zeroing it out. Repeatedly.

I have also had to remove initialization from memory systems in which
the data structure was carefully chosen to be O(k) in memory based upon
the number of *intersections* (normally very small) rather than O(n) in
the number of objects (normally very large). Writing all zeros to the
memory which handled the sweepline queue (which had a very nice
implementation that grew into memory gradually in an amortized manner
using sentinels) changed the algorithm from time O(k*n) (k intersections
touched * n objects touched--effectively O(n) in most cases) to O(n^2)
(basically, the algorithm doesn't run). Removing that initialization
made a huge difference.

And, finally, my best counter example to your "write zeros" argument is
sitting in practically every operating system. *Nobody* writes all
zeros to a disk anymore. Ever. They zero out the metadata and leave
the garbage on disk until they overwrite it.

Space does not have to be zeroed to be initialized.
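
A rough sketch of the bitvector trick described above (the structure and
names are made up): the big value array is never memset at all; a small
one-bit-per-slot map, which *is* zeroed, records whether a slot has ever
been written, and reads of untouched slots just return a default.

#include <stdio.h>
#include <stdlib.h>

/* Lazily "initialized" array: only the bitmap is zeroed, never the
 * (potentially huge) value array itself. */
typedef struct {
    int           *vals;    /* deliberately left uninitialized */
    unsigned char *valid;   /* one bit per slot, zero-filled, tiny */
    size_t         n;
} lazy_array_t;

static int lazy_init(lazy_array_t *la, size_t n)
{
    la->vals  = malloc(n * sizeof *la->vals);
    la->valid = calloc((n + 7) / 8, 1);     /* n/8 bytes, not n*4 */
    la->n     = n;
    return (la->vals && la->valid) ? 0 : -1;
}

static void lazy_set(lazy_array_t *la, size_t i, int v)
{
    la->vals[i] = v;
    la->valid[i / 8] |= (unsigned char)(1u << (i % 8));
}

static int lazy_get(const lazy_array_t *la, size_t i, int dflt)
{
    if (la->valid[i / 8] & (1u << (i % 8)))
        return la->vals[i];
    return dflt;            /* the big array was never touched */
}

int main(void)
{
    lazy_array_t la;
    if (lazy_init(&la, 100 * 1000 * 1000) != 0)  /* ~400 MB of ints */
        return 1;

    lazy_set(&la, 42, 7);
    printf("slot 42 = %d, slot 43 = %d\n",
           lazy_get(&la, 42, 0), lazy_get(&la, 43, 0));

    free(la.vals);
    free(la.valid);
    return 0;
}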

-a
Doug LaRue
2008-04-30 22:31:52 UTC
Permalink
** Reply to message from "Lan Barnes" <***@falleagle.net> on Wed, 30 Apr 2008
16:14:59 -0700 (PDT)
Post by Lan Barnes
Also consider CM and QA. Over the years, you may have 3 or more separate
build environments (OS, compiler ver, ancillary tools and libs) per
development cycle. You will want these to be immediately available, and
for years to come, for reliable rebuildability.
Years ago on a UNIX box (HP9000) we had all these on the one box but also
had a ton of nice scripts to allow us to easily build our code and tie it
into any of the environments we needed to. QA did the same and then took
the code to the hardware they'd be testing on (a sonar system).

VMs make it easy if you don't have a system of keeping things in their places.
You can either use the bento box or separate plates for everything. The plates
are cheap and everyone knows what a plate is so it's "easier".
Post by Lan Barnes
In windoze OR Linux, VM is a godsend to SCM. Before VM we documented it
and prayed no one ever asked us to do a rebuild in later years.
In the PC environment, it is nice to have the whole kit and caboodle wrapped
up and ready to be fired up at a moment's notice. I've already retired some
of my old systems to VM images on my RAID box just so I can someday get to
them to see what I should keep and what should be thrown away. Sure beats
keeping that old hardware around, while still being able to work in the old
desktop environment. And agreed, when a customer with old old software wants
to pay for support, you can now fire up what they are running to see just
what the heck they might want to have fixed.

Doug
Andrew Lentvorski
2008-04-30 21:44:36 UTC
Permalink
Post by Joshua Penix
Post by Lan Barnes
OK, all you xen bashers from last week -- have at it!
Xen has its place -- virtualizing many instances of a supported OS
whose kernel can be modified to allow paravirtualization... a.k.a.
Linux. It works really well in that case.
Where Xen falls down is in virtualizing other OSes. Windows under
Xen works, but the performance is crap unless closed-source
paravirtual drivers for network and disk are installed. FreeBSD is
apparently not very well tested under Xen, or at least is having
trouble keeping up with Xen's rapid development cycle.
Spoken like a true Cascade of Attention Deficit Teenagers(tm) apologist.

I call bullsh*t.

The issue is that lack of regression testing is considered acceptable.

It is perfectly possible to add features, change architecture, adjust
APIs, etc. without breaking the old stuff. But you have to have a test
for it.

And Xen doesn't. That's it in a nutshell.

It's why I'll probably settle on OpenSolaris in spite of its issues. At
least Sun will eventually move xVM into their testing infrastructure.

At which point we'll actually have a stable, open-source virtualization
product.

-a
Doug LaRue
2008-04-30 22:43:26 UTC
Permalink
** Reply to message from Andrew Lentvorski <***@allcaps.org> on Wed, 30 Apr
2008 16:43:12 -0700
Post by Andrew Lentvorski
It's why I'll probably settle on OpenSolaris in spite of its issues. At
least Sun will eventually move xVM into their testing infrastructure.
In my looking around for USB stuff on VirtualBox, I read that Sun should
be moving the non-free features into the open source version over time.
That'll be nice.

Doug