I realize these will not qualify as "Layman" explanations, as a certain amount of information-processing theory is needed to cover the ground.
I will attempt to simplify further, although questions can provide a catalyst for reducing complexity and confusion.
Layman's view of VCSY part 2
In order to explain why MSFT is blocked by VCSY patents, I will have to explain what needs to happen (Section 1) and how that happens (Section 2) with VCSY's patent 7076521 aka 521.
I will attempt to provide a “Short Story” version, but, for now, this is the long version.
*****Section 1.*****
>>>>WHY?
To decipher why VCSY patents and patent-derived products trump the capabilities of other technologies, we need to examine two key words: Virtual and Arbitrary.
“Virtual”? Virtual means “not actual”. “Actual” means the real data contained in an application or available for direct processing by the application. Essentially, “virtual” means a faithful representation of the actual format; beyond that, it means a faithful execution of an actual process in another processing form.
Virtualization is a buzzword currently in IT vogue, given the public exposure of a coming “battle” between VMWare, Microsoft's Viridian, and other virtualization products. Virtualization will revolutionize the use of operating systems and applications and bring enhanced value and productivity to new and legacy systems.
A fundamental tenet of virtualization teaches that anything “soft” (as in soft-ware) can be represented as something else soft without changing the fundamental operating qualities. Data is hard reality handled in a soft form without changing the essential “being”.
Data can only be consumed if it is presented in a useful form.
Data may be “native” to a system, meaning the data is readily understood and processed because the application is built to use the data in the presented form.
Or, data may be “foreign” to a system, meaning the data must be processed first into a useful format (form) before the system can then treat that data as though it were native.
An analogy can be seen in the difference between Metric and English measurement. Both are “values” and therefore are “data”.
Both measurement systems are readily understandable to humans. But, unless a means of transforming one value into the other (via reference table or some other means) is readily available, one data format will likely be considered native and useful to the consuming human culture and the other will be foreign and unusable.
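To make the analogy concrete, here is a minimal Python sketch of that “reference table” idea. The table entries and function name are mine, purely for illustration:

CONVERSION_TABLE = {          # the "reference table"
    ("inch", "centimeter"): 2.54,
    ("pound", "kilogram"): 0.45359237,
}

def to_native(value, foreign_unit, native_unit):
    # Transform a foreign measurement into the form this system treats as native.
    try:
        factor = CONVERSION_TABLE[(foreign_unit, native_unit)]
    except KeyError:
        # No transformation available: the data stays foreign and unusable.
        raise ValueError("no way to express %s as %s" % (foreign_unit, native_unit))
    return value * factor

print(to_native(12, "inch", "centimeter"))   # 30.48 -- now consumable as native data

Without the table entry, the value never stops being foreign, no matter how capable the consuming system is.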
A system processing native data may produce and consume that data in any way desired by design. There are no boundaries to that processing other than the creative limits of the builders. However, ingesting one small piece of misapplied foreign data will stop the system cold, no matter how brilliant the designers, unless they have built into the system a way to handle the unexpected or foreign.
Designing for all possible screw-ups or surprises is what builds cost into a system. Simple is cheap.
Thus, a cheap, simple system processing native data will do well as long as the foreign requirement is never faced. It is not a matter of resources. It is a matter of presentation. Even the most expensive and complex system will become dumb as a fence-post with foreign data.
So, is the solution X: to learn all possible forms of expression? Or is the solution Y: to provide one universal form of expressing all possible forms? Decades of computing design demonstrate that option X is expensive and difficult while option Y is inexpensive and easy.
So, the data between different systems should be standardized into a single universal form so that all applications may apprehend and consume that data without having to modify the established methods built into the proprietary system.
That's a tall order since all cultures consider their proprietary standards to be superior to other cultures.
Fortunately, in the data world, there is a standard that allows this universal presentation, and that standard is called XML (eXtensible Markup Language). Remember this: all XML does as a “programming language” is represent data as structure and value. Fortunately, as with all elegant solutions, that is all that needs to be done to establish a basis for computing virtualization.
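To show how little XML actually is, here is a small Python sketch (the element names are invented) that builds a document out of nothing more than structure and value:

import xml.etree.ElementTree as ET

invoice = ET.Element("invoice")                 # structure
customer = ET.SubElement(invoice, "customer")
customer.text = "Acme Corp"                     # value
amount = ET.SubElement(invoice, "amount")
amount.set("currency", "USD")                   # structure carrying a detail about the value
amount.text = "125.00"                          # value

print(ET.tostring(invoice, encoding="unicode"))
# <invoice><customer>Acme Corp</customer><amount currency="USD">125.00</amount></invoice>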
But, you will find, it is not the expression that virtualizes but the processing available to arrive at that expression that does the virtualization.
If proprietary application A and proprietary application B send each other copies of their respective native data presentations, chances are neither will be able to work with the other's data, as any small portion showing up as a foreign article will cause confusion and error, or downright failure. Unless the builders build or modify the applications to work with each other using the same data format, interoperation is not possible.
However, if app A and app B can simply transform each of their native forms into XML and back again to native, the XML that meets in the middle will represent a universally understood form any other application with a standard XML capability may process. Thus, instead of having to modify an application to work in a new system, the application may join the system as is and the availed XML allows that application to work with any other application in the universe of “interoperating” apps.
The data in each application can be said to have been “virtualized”, or changed from actual to virtual, because the XML representation is a transformed instance of the value that was held in a proprietary format and structure. Unless the processing consumes XML directly (something we will cover in examining “arbitrary”), the native form of the data is the actual or real form being processed by the applications. The virtual form of the data is what is pooled for the entire system to use for communication (again, depending on whether applications can process XML directly, which we will cover in “arbitrary”).
So, to achieve at least the first level of virtualization, we go from native A to XML to native B. We can, of course, then go from native B to XML to native A. We thus have “electronic data exchange” which is the simplest form of virtualization and a mainstay of XML “programming” in IT systems for the past decade+.
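Here is a rough Python sketch of that native A to XML to native B exchange. The “native” formats are invented stand-ins (app A stores a comma-separated line, app B wants a keyed dictionary); the point is only that the XML in the middle is universally readable:

import xml.etree.ElementTree as ET

def a_native_to_xml(line):
    # App A exports its native "name,quantity" line as universal XML.
    name, quantity = line.split(",")
    record = ET.Element("record")
    ET.SubElement(record, "name").text = name
    ET.SubElement(record, "quantity").text = quantity
    return ET.tostring(record, encoding="unicode")

def xml_to_b_native(xml_text):
    # App B imports the XML and rebuilds it in its own native form.
    record = ET.fromstring(xml_text)
    return {child.tag: child.text for child in record}

xml_in_the_middle = a_native_to_xml("widget,42")
print(xml_in_the_middle)                   # <record><name>widget</name><quantity>42</quantity></record>
print(xml_to_b_native(xml_in_the_middle))  # {'name': 'widget', 'quantity': '42'}

Neither application had to learn the other's native form; each only had to reach the XML in the middle.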
Virtualization as a simple core capability is thus the ability to universally represent the native form of any data in an application.
The XML standard is designed to allow all builders to virtualize data for use by any other builder.
HOWEVER.
XML does not have to be used to virtualize. “Virtualization” per se only requires a standard agreed upon by consenting builders. A builder may use his own standard or a standard agreed upon with partner companies. This is often done by partnering builders to exclude competitors; a key of sorts, letting folks who know the standard into the clubhouse and keeping those without a key out.
So, even if XML is used, companies can develop various proprietary ways to process the transformation to and from the XML presentation, thus allowing only those applications adhering to the builder's proprietary XML standard to join in processing that data.
This is the case with the Microsoft version of XML for Office application document files called OOXML. Microsoft's variations on the base XML standard are the source of conflict in the international standards community, where the ODF standard presents XML that may be processed by ALL applications, whereas the MSFT version of XML has been crafted to exclude non-MSFT applications.
So, virtualization may be applied in a number of different ways for different purposes.
For example, VMWare uses proprietary methods to virtualize the targeted operating system/application relationship so the OS/application combinations may be used on different operating systems. The virtualization method is proprietary so you can't simply plug your operating system into the VMWare system unless VMWare has crafted the system to handle your OS and apps.
Sun ZFS uses proprietary ways to virtualize the data in the operating system so outside non-Sun applications may communicate with the OS without having access to the proprietary OS command and data bodies. This is a more “granular” or lower-level type of virtualization, more akin to what VCSY can provide. Sun ZFS does what Microsoft WinFS was supposed to do before WinFS was killed. Sun has a read/write version allowing applications to pass information back and forth between the Sun OS and the application across the virtual path. Apple has a read-only version of ZFS allowing apps to get values from the OS but not pass data back to the OS.
Microsoft OOXML allows Microsoft and partner applications to process a virtual representation of the MSOffice document data while preventing non-partnered applications from doing so. This is called “lock-in”, ensuring users will have to buy MSFT/partner products to join in the processing interoperation.
If these companies were to convert their applications to present all their operations to the outside world in international standard XML, ALL other applications would have an opportunity to join the party. So, why don't they?
The most prevalent argument has it that the vendor doesn't want to commoditize their products by allowing them to be accessed and intermixed with other vendors' products. But that seems a lame argument when analyzed against the resulting customer value and the value added to each vendor's products (unless the virtualization uncovers intentional lock-in built in by a vendor; THAT is the fear).
The next argument has been that virtualization is difficult, and that is true if modifications to your current product line are required. THAT is a valid argument and a real problem, and it is a key reason for using 521.
521 does not require modification to the application.
Why? Here is the biggest kink: being able to virtualize data is ONLY the first step in the process. The idea is to virtualize the computing process that produces and consumes the data.
Simply being able to export and import XML are large steps in application evolution, but by no means the end of the road. Without a way of passing information about the data and the process (data called metadata, or “data about data”), the applications are flying blind.
Various parts of any body of data being virtualized into XML will likely be used by the applications as values. Other parts to be consumed may be metadata. Still other parts of the body of data may be event triggers, commands and process state parameters to be used by the various applications to make sure all interoperating processes are carried out properly. These elements must all be available as a virtual product for the application interconnection to be conducted properly.
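As a purely invented illustration (not taken from the patent), here is what a virtualized XML body carrying plain values alongside metadata, a command, and process state might look like when built in Python. Every element name below is hypothetical:

import xml.etree.ElementTree as ET

envelope = ET.Element("envelope")

meta = ET.SubElement(envelope, "metadata")            # data about the data
ET.SubElement(meta, "source").text = "app-A"
ET.SubElement(meta, "schema").text = "inventory-v1"

ET.SubElement(envelope, "command").text = "update"    # what the receiver should do
ET.SubElement(envelope, "state").text = "pending"     # where the shared process stands

payload = ET.SubElement(envelope, "payload")          # the actual values
ET.SubElement(payload, "item").text = "widget"
ET.SubElement(payload, "quantity").text = "42"

print(ET.tostring(envelope, encoding="unicode"))

The receiving application reads the metadata, command, and state to know what to do with the payload, rather than flying blind on raw values.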
So, since an application must FIRST be able to express itself to the outside world as XML, we now find the biggest current obstacle: routines to import and export XML are not always available, and routines to process XML internally are even less likely to be available.
Although Microsoft (not to pick on MSFT, but they are a perfect example; the rest of the world is worse off) is a large company with substantial resources, only a portion of their products import, export, or internally process XML. That means they must either modify those applications to do so or use an outside application the industry refers to broadly as “middleware”.
Legacy systems built before circa 2000 likely do nothing with XML and rely solely on native data forms to operate. These must have middleware solutions appended to their operations to virtualize, as modifying legacy systems is often difficult to impossible.
Modern systems also likely have little or no XML capability, as the industry has battled with various methods of implementing solutions to the legacy problems (their software becomes legacy to future XML installations the moment it ships). The least troublesome solution is typically not to provide the capability in the application body itself but to hold off for further development, which is likely a middleware solution.
So, how does a middleware solution used with legacy and proprietary applications allow them to be virtualized?
Now we come to the reason for patent 521, and the reason companies like Microsoft and various others are unable to do much more than the simplest interoperating processes using their own technologies between like or unlike applications.
*****Section 2*****
>>>>HOW?
How Doooo they Do it?
The title of 521 is “Web-based collaborative data collection system”. This title is deceptively simple, as the action described by that phrase alone allows a wide range of virtualization capabilities.
You may notice the patent describes all the various elements required to form a computer. Usually computers are built as processing assets on a hardware chip. But 521 is a virtual computer, also known as a virtual machine or VM [1].
The concept of a virtual machine is not unique. The architecture of this particular VM IS unique, however, and is key to the novel 521 capabilities. I will be explaining only the simplest operation as that may jog further thinking for other solutions.
The 521 virtual machine is designed to run on a web platform such as a browser, yet it is able to connect to data in a proprietary application (including an OS, since that too is an application, albeit the underlying “base” or “platform” application sitting between the applications and the bare hardware referred to as “bare metal”).
Because the 521 is typically used to augment and support programming workload on the client, the 521 product is called an “agent”. Actually, an agent is what can be built using the 521 claims, for 521 is actually the creative ecology for all derivative agent products and claims.
What that means is 521 is what you build 521 products (which are agents) out of.
We will do the simplest use of 521 here before we graduate to larger project capabilities and patented product derivatives which are potentially very many.
So, we have a “web-based” agent which may reside on the browser (or on the OS or on bare metal, but more about that later) which may retrieve proprietary data on the local machine, process that data and present that data as an XML representation.
The XML data may be transported by the agent via http to any other real or virtual computer for further processing. It is the http protocol that makes the agent “web-based”; it does not limit the agent to residing on the browser.
In our virtualization examples above, the other “target” computer would use a similar [or different – doesn't matter] agent to transform the produced XML into the proprietary data form and place that in the proprietary data store for that client application to work with.
At that point the client could be anything from a mainframe to a micro-device.
That is essentially one of the simplest “bridges” you can build, and it is a base concept that describes one of the 521 children: the “XML Enabler Agent”.
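To make the bridge concrete, here is a rough Python sketch of the idea only; it is not the patented agent, and the file name and data format are invented. A tiny local http “agent” reads a native store and serves it to any caller as XML:

import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer

NATIVE_STORE = "appdata.txt"   # hypothetical proprietary data file of "key=value" lines

def native_to_xml(path):
    # Transform the native store into a universal XML expression.
    root = ET.Element("data")
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            key, _, value = line.strip().partition("=")
            ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

class AgentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any real or virtual computer may fetch the virtualized data over http.
        body = native_to_xml(NATIVE_STORE).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AgentHandler).serve_forever()

A matching agent on the receiving machine would fetch that XML and write it back into its own native store, completing the bridge described above.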
In this capacity, the 521-derived agent can act as an agent for a proprietary application, virtualizing any size data body in the application datastore. That virtualization may be expressed as anything (not just XML), depending on the programming inputs given to the MLE, so proprietary data IN and XML (or anything else) OUT is the beginning of a universal virtualizer.
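The MLE itself is proprietary and not something I can show; this Python sketch only illustrates the “proprietary IN, anything OUT” idea, with one invented native record expressed in whichever output form is requested:

import json
import xml.etree.ElementTree as ET

def virtualize(native_line, output_format="xml"):
    # Take app A's invented "name,quantity" record and express it as requested.
    name, quantity = native_line.split(",")
    fields = {"name": name, "quantity": quantity}
    if output_format == "xml":
        record = ET.Element("record")
        for tag, value in fields.items():
            ET.SubElement(record, tag).text = value
        return ET.tostring(record, encoding="unicode")
    if output_format == "json":
        return json.dumps(fields)
    raise ValueError("unknown output format: %s" % output_format)

print(virtualize("widget,42", "xml"))
print(virtualize("widget,42", "json"))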
SO, COULD MICROSOFT DO IT DIFFERENTLY?
Sure. They can modify the various individual server-based applications they own to express XML to the client and return received XML back to the proprietary expression on the main server or some other server (which is what SOAP does). This configuration would preserve their “server-centric” philosophy, but they would then not be in a position to act as a replacement for the main server should the internet connection between the client and the main server go down. Such “off-line” processing, while waiting for the on-line web connection to return, is a significant point of contention among SaaS critics.
Such on-line/off-line capability may only be served by a processing facility at the local client machine. This is what the agent does.
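Here is a bare-bones Python sketch (invented names, not the patented design) of why that local facility matters: the agent keeps accepting work while the connection is down and forwards the queued XML once the connection returns:

class LocalAgent:
    def __init__(self, send_to_server):
        self.send_to_server = send_to_server   # callable that pushes XML upstream
        self.outbox = []                       # XML messages waiting for a connection
        self.online = False

    def process(self, xml_message):
        # Handle the work locally right away; queue the result for the server.
        self.outbox.append(xml_message)
        if self.online:
            self.flush()

    def flush(self):
        while self.outbox:
            self.send_to_server(self.outbox.pop(0))

agent = LocalAgent(send_to_server=print)                 # stand-in for an http POST
agent.process("<record><name>widget</name></record>")    # accepted while off-line
agent.online = True
agent.flush()                                            # forwarded once the connection returns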
MSFT can provide executive agents of their own at the “end” of RSS pipes at the local client (this would leverage the inbred subscription/transaction activity in RSS), with XML routers to pass messages to various executive elements on the browser or client OS. The problem there is that all processing of the proprietary data to XML must necessarily occur remote from the local client, at the main server. Therefore, latency between the processed answer and its arrival at the client is exacerbated, and there is no off-line processing capability.
This RSS method is a key component of Ray Ozzie's vision for providing MSFT's software+service distributed computing. While it works with a Remote Procedure Call architecture like SOAP, it does not qualify as “distributed computing” but, rather, distributed services.
The off-line processing vulnerability is the key obstacle inherent in the RSS method. The RSS method still needs a local agent to perform processes while the web is off-line (which renders the RSS silent until connection is resumed).
So, the above two example workarounds show the answer to “Could MSFT do it differently?” is a qualified “yes, but”. The solution does not meet the requirements of true web application construction and does not accomplish virtualization at the client; it requires virtualization at the main server, a method which has been the traditional means of providing web pages since the 1990s.
The main reason VCSY's 521 is superior to the MSFT method is found in 521's ability to act anywhere, in any configuration, under any circumstances. “Proving” that is simply a matter of walking through thought experiments with various architectural configurations being exercised to watch the issues faced by designers. This can be done with 521 against any other vendor's methods, and 521 holds flexibility, scale, and power advantages throughout. I will gladly provide such thought experiments in greater detail.
I believe the virtualization and client-side server capacity are inherently desirable compared against the traditional RPC method.
The sister to “virtual” is “arbitrary”, a term not heard in the current buzz because it has not been cataloged by any mainstream provider. Arbitrary is the keyword for the 744 patent, which is the one MSFT is being sued for infringing in .Net.
I think you will see where in .Net such “arbitrary” capability might be needed and where it appears to be or have been.
Footnotes:
[1] Virtual Machine, aka VM: a data-processing computer that is not actual or “real” but is built out of software and runs using internet protocol (in this discussion) to communicate with other computers, whether actual or virtual. The 521 VM is a virtual micro-server able to perform data communications between internet systems using the http protocol. It is thus a web server that runs on the local actual machine. A VM like 521 may be called a “runtime” [3], but 521 calls the VM with its accompanying processing resource streams an “agent”.
[2] The 521 VM is composed of primitive elements that perform disk and other resource I/O operations. These elements may be assembled into a workable application, accessing the proprietary resources and functions of the underlying platform (typically the OS running the browser), by invoking the primitives in a dynamic markup script. This is the “program” that builds the executive functions and workflow of the web-based application.
[3] The term “runtime” is a proprietary word denoting the operating executive kernel residing as the first and central element of a VM, coupled with a programming language to access the resources connected to the kernel. Typical of such a description is Microsoft's Common Language Runtime, which is the means of executing applications on Windows, and the new Dynamic Language Runtime, which is designed to operate on the web and run a markup-based programming language.
[4] VCSY's “dynamic markup language” Emily (the executive kernel is the Markup Language Executive) was introduced by VCSY in 2000. Microsoft's dynamic markup language is still under wraps.
Signed,
the real portuno
http://search.messages.yahoo.com/search?.mbintl=finance&q=portuno_diamo&action=Search&r=Huiz75WdCYfD_KCA2Dc-&within=author&within=tm