Sunday, May 20, 2007

Emails from the Edge; PowerAid...!!!

PowerExecutive software, part of the IBM Systems Director portfolio, will now be available across all IBM systems and storage. Originally designed for IBM BladeCenter and System x, in November 2007 IBM will roll the free energy management technology out across IBM System i, System p, System z and System Storage. It is the only energy management software tool that can show clients the power actually being used, as opposed to benchmarked power consumption, and it can effectively allocate, match and cap power and thermal limits in the data center at the system, chassis or rack level. By enabling power capping, clients can effectively run their systems on cruise control.
(Look what we found above that IBM is doing. And on top of that, we get an email about POWER as well. Coincidence or what?)
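To make that "cruise control" idea concrete, here is a minimal sketch of what a power-capping loop might look like in principle. Every name here is invented for illustration; this has nothing to do with PowerExecutive's actual internals:

```python
# Hypothetical power-capping loop: step to slower performance states
# until measured draw falls under the cap. Names are made up.

def cap_power(read_watts, set_pstate, cap_watts, states=(0, 1, 2, 3)):
    """Throttle to deeper power states while measured draw exceeds the cap.

    read_watts: callable returning the current measured draw in watts
    set_pstate: callable applying a performance state (0 = fastest)
    states:     ordered performance states, slowest last
    Returns the final state applied.
    """
    level = 0
    while read_watts() > cap_watts and level < len(states) - 1:
        level += 1                      # step down to a slower, cheaper state
        set_pstate(states[level])
    return states[level]
```

The point is only that capping is a feedback loop against *measured* draw, which is exactly why the press release stresses actual rather than benchmarked power.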

Email from port:


Morrie -

Think of emPath in NOW Solutions projects as glue: it ties Microsoft .NET stuff, third-party applications and communication/interoperation methods together, acting as an extender across application distance to give two functions of different build and nature the means to work together.

That requires enabling, or virtualizing, a proprietary data store into some form that can communicate across an agnostic connection with every other source or consumer of the target data (this is the XML data stream that is universally consumed within the framework).
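As an illustration of that enablement step, here is a minimal sketch of wrapping a proprietary key/value record in a vendor-neutral XML envelope. The element and field names are invented for this example, not taken from emPath or the XML Enabler Agent:

```python
# Sketch: "enable" a proprietary record as an agnostic XML stream that
# any consumer can parse without knowing the source's native format.
import xml.etree.ElementTree as ET

def enable_record(source_name, record):
    """Wrap a proprietary key/value record in a vendor-neutral XML envelope."""
    root = ET.Element("datastore", source=source_name)
    for key, value in record.items():
        field = ET.SubElement(root, "field", name=key)
        field.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Once the data is in this neutral form, any XML-aware consumer can read it; the proprietary layout stays hidden behind the enabler.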

That also requires a management solution that lets designers and operators marshal all the sensor inputs and modulator outputs, along with all the possible dynamic connections the data model and workflow can take.

If emPath can make different applications work as one, it is not a stretch to apply the same concept to the individual server datastores that monitor the "behind the scenes" parameters of board/circuit-level server operation. Power management on motherboards relies on data picked up from sensors on the boards and stored onboard in support memory. Various aspects of circuit operation take their cues from this onboard store as processed by functional code, which resides either in embedded board code or in external software reachable over the network.

As these board datastores are typically proprietary, they do not mix well with other hardware datastores. However, enabling those stores to a virtualized arbitrary stream (tadaaaa, a variant of the XML Enabler Agent method) lets all the boards report their numbers (voltages, current, temperature, bus speeds, fault sensing, etc.) and receive modulation of those values (voltage/current control, cooling augmentation, clock/process optimization, remediation, etc.) from outside management software that monitors and regulates the target controls.

The current state of affairs, where server farms of hundreds or thousands of machines are turned on and left on for lack of any facility for granular power management, versus an intelligent farm able to modulate its response to power constraints, is a poster child for true virtualization.

We think of "power management" as what makes the screensaver come on, or what happens when "the system" powers the monitor down when not in use. Power management on server circuitry can act in far more sophisticated ways, shutting down parts of the board when idle while keeping other parts alive for various needs.

Constructing server/storage/router hardware to provide more granular hardware control should prove to be a new business paradigm in the race for more efficient hardware operations through automation.

We have been watching the IBM POWER thing with empath for a while now, and IBM's description of its activities implies the kind of architecture already embodied by NOW Solutions emPath targeting business applications.

An application is an application. What an application is used for should have no real bearing on how it is constructed for maximum granularity, availability and facility. A single architectural method is preferable to many varied ones. A reusable architecture is highly preferable to a specifically targeted application envelope and response.

Remember... all is data, and any functionality on that data is an application. Size means nothing if an agnostic, granular facility with comprehensive real-time distributed processing is provided. Given the many successes NOW has had with all the application packages it must have encountered while integrating client environments over the years, I would say there is no reason to think emPath could not perform what POWER would require.

Since we know Ross Systems was engaged in distributed automation, I see no reason not to assume emPath had already done work in that area. VCSY would be negligent not to recognize the usefulness of the emPath model in all other vertical applications. IBM would be negligent not to take advantage of NOW Solutions' emPath background in distributed automation.

The narrow thinking of most people is that software designed for one use can only be used there. That has been the most difficult hurdle in educating most people about the predicted results of enablement, virtualization and arbitration. Properly virtualized application structures should be ready to apply to any other methods and components.

Why don't I assume POWER empath is an IBM concoction devoid of any VCSY tech? First, IBM has known about VCSY technology since at least 2001. POWER empath is a new IBM development surfacing in the last few years. I see no reason for IBM to muddy its own branding when NOW Solutions emPath is prominent as an IBM partner. I find the inferred IBM incompetence here ESPECIALLY difficult to conceive when the architectural descriptions of both "empaths" are the same and they differ only in their projected requirements, assets and workflows.

That's why "arbitrary" is so important and I don't see anyone on Microsoft's side using that word anywhere close to its true meaning.

IBM's business is changing as these new technologies make new opportunities and capabilities available quickly and cheaply. Where the industry would have had to add hardware back in the day, we provide software to re-purpose the hardware and bring it in line with evolving criteria and constraints.

A new ability to agnostically interface with hardware (enabling the hardware datastores for real-time, bidirectional, transactional communication with the network/internet) is what IBM was talking about recently when it said it was centering its business increasingly on software services, and doing so with less hardware dependence.

Where DB2 9 (codenamed Viper) uses obvious enablement and virtualization to build vendor independence into new and existing database products, POWER offers to build vendor independence into new and existing server installations.

Optimizing legacy hardware is the first predictable IT ROI from virtualization: use every usable processor when it's needed, and turn the unused services off.
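One way to picture that ROI is a naive first-fit packing of workloads onto as few servers as possible, with the remainder powered off. This is a toy sketch with invented numbers, not anything from IBM or VCSY:

```python
# Sketch: pack workload sizes onto servers of equal capacity (first-fit
# decreasing); servers beyond the returned list can be powered off.

def consolidate(loads, capacity):
    """Return the per-server load totals after first-fit-decreasing packing."""
    servers = []
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] += load      # fits on an already-running server
                break
        else:
            servers.append(load)        # needs another powered-on server
    return servers
```

Five workloads that would naively occupy five machines can end up on two, and the other three stop drawing power.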

Now extend that idea of hardware control to more sophisticated models, and each server farm can gather its own operating characteristics and save electrical costs across aggregated server masses. Such operations are the way to increase your software load efficiencies without increasing your hardware footprint or specs.

There is a world of legacy systems with similar undiscovered characteristics embedded in utility memory onboard their hardware. Virtualization and arbitration give these legacy systems a renewed lease on usefulness (new GUI, new interchange, new optimization and regulation, new governance): their legacy processing power is most likely adequate for the jobs they've done all along; they simply need a facelift, some brains and dancing skills.

Virtualization not only gives legacy computer farms a quickly modernized software rejuvenation but may allow existing equipment to meet specs written for more modern systems.

Such savings can turn outdated junk yards into golden egg farms.

POWER management with an empath interoperation facility would virtually interconnect and route sensor data from the hardware (we're moving data the sensors transmit into proprietary code on the hardware, or marshaled to local input converters) to the management solutions that handle "the problem", then route the data derived by the management solutions back to the board datastores, where onboard facilities modulate the circuits for load optimization.
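That round trip (sensor readings out, derived setpoints back) might be sketched like this; the store and policy shapes are hypothetical stand-ins invented for illustration, not anything from IBM or NOW Solutions:

```python
# Hypothetical round trip: marshal sensor readings out to a management
# policy, then write the derived setpoints back into the board datastore.

def round_trip(board_store, policy):
    """board_store: dict of sensor/control name -> current value
    policy:      callable mapping readings -> dict of control -> setpoint
    Returns the setpoints that were written back to the store.
    """
    readings = dict(board_store)      # outbound: marshal sensor data
    setpoints = policy(readings)      # management software derives a response
    board_store.update(setpoints)     # inbound: modulate the circuits
    return setpoints
```

The value of the enabler is that `policy` never needs to know the board's proprietary format; it sees only the neutralized readings.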

Even larger granules may be controlled by adding upgraded cooling modules to legacy enclosures, providing on-off or speed control of server fans. Modulating enclosure temperature has a significant impact on the operation of electrical and electronic systems, as hotter circuits tend to waste energy through various electronic characteristics such as leakage, induction and capacitance.
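A toy version of that fan-speed modulation, with made-up threshold values rather than any vendor's spec, is just a linear ramp between an idle temperature and a full-speed temperature:

```python
# Toy proportional fan controller for a legacy enclosure; the 35/60 C
# thresholds are invented illustration values.

def fan_speed_pct(temp_c, idle_below=35.0, full_above=60.0):
    """Map enclosure temperature to a fan duty cycle between 0 and 100%."""
    if temp_c <= idle_below:
        return 0.0                    # cool enough: fans off
    if temp_c >= full_above:
        return 100.0                  # too hot: full speed
    span = full_above - idle_below
    return round(100.0 * (temp_c - idle_below) / span, 1)
```

Even this crude a control beats the all-on-all-the-time fans of an unmanaged enclosure.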

So, what VCSY technology brings to the table is the ability to marshal all this disparate proprietary information to software that can see it and act on it, then marshal the actioned data back to the actuating hardware datastores so the derived solutions can be put to work. emPath marshals and interchanges interconnected data among the various dispersed and different operating functions (typically office applications in HR, but anything may be an application just as everything is data), and I believe it's being used by IBM in their POWER architecture.

I know critics will say there is no way to know this, but there ARE ways to mitigate what we do not know, and there are ways to correlate what we do know with language and culture within IBM (if we were talking about empath in Microsoft, we would probably be looking for filed-off serial numbers). If someone has something ELSE besides NOW Solutions emPath to glue various applications together, I would consider that perhaps that is in use; except there are none out there that I've seen, and the branding problem of IBM using empath as a project or system name doesn't play well with IBM's image of professional compartmentalization.

Whoa, look at all the words. Man, I can put out some verbiage, no? heh heh Dude you should see what I can do with a load of specifications. Mad paper skills.

Anyway, hope this adds some possible knowledge and makes some things more clear.

pd

1 comment:

Anonymous said...

IBM work in virtualization is leveraging the IBM mainframe.---Will radically change its future? VCSY represents the Vendors on this board---interesting crowd.

http://www.tpfug.org/Background/officers.htm