Wednesday, September 26, 2007

OrgPlus links up with Northern Health's Now Solutions emPath System

It's all out there - You'll have to use the Google to find the URLs. It will build character.

Northern Health Uses OrgPlus® to Ensure Post-Merger HR Data Accuracy and Structural Efficiency (Thanks for digging this up, Jes!)

Northern Health delivers healthcare to more than 300,000 people in a half million square mile area in northern British Columbia, Canada. The organization is made up of more than two dozen acute care facilities, 14 long term care facilities, public health units and offices providing specialized services. Northern Health physicians performed more than 24,000 surgeries, welcomed 3,000 babies and treated nearly 255,000 emergency patients in the 2004-2005 operating year.


Challenge
Northern Health was formed by merging 40 independent healthcare facilities to provide these locations with central governance, planning and communication. The desired structure also allowed individual facilities to maintain local operating and decision-making autonomy. As a newly formed public healthcare organization with more than 7,000 employees and an annual budget of $521 million, it had to ensure effective management of its converged workforce by instituting an efficient organizational structure.


Solution
The management of Northern Health understood that an intelligent organizational charting system would serve the company as a platform for workforce and structural planning. Key decision-makers from across the organization collaborated on a list of functional requirements for a solution. After a lengthy comparison of OrgPlus to other products, Northern Health selected OrgPlus based on overall look and feel of the user interface, the clean-looking charts for presentations and its competitive price. OrgPlus also outperformed competitive offerings in security, conditional formatting, integration with Microsoft Office and exporting capabilities to PowerPoint and Adobe Acrobat.

OrgPlus' dynamic organizational charts mapped the entire hierarchy of 7,000 employees for communicating the new structure, geographical locations and reporting relationships. The new chart allowed management to view the organization as a whole, as well as by its separate business units and allocated resources. Problems with the hierarchy were easily pinpointed and corrected in order to reflect accurate reporting relationships and budgeting needs.


Results
As soon as OrgPlus was linked to Northern Health's Now Solutions emPath HR system, problems with the accuracy of their HR data were immediately identified. The HR department and approximately 300 department managers at various geographic locations were given secure access to edit charts, validate data and correct inaccuracies. With OrgPlus, the organizational structure now keeps pace with change. Changes are automated, eliminating the need for constant maintenance and correction by management. Problems with data or structure are easily identified and corrected in minutes in the emPath HR system.

Beyond improved data and structural accuracy, the intelligent organization charts dramatically improved communication across the organization. OrgPlus functions as a self-service directory where employees get answers quickly about reporting relationships, employee locations and contact information. Detailed profiles of each employee provide e-mails, phone numbers and addresses. The directory is linked to Microsoft Outlook allowing employees to locate the proper person to contact and e-mail them with a single click.

With the successful roll-out of OrgPlus completed, Northern Health's human resources department is beginning to pull HR metrics from emPath into OrgPlus and is planning to perform workforce modeling, salary roll-ups and other key analyses to prepare for future changes.

With OrgPlus, Northern Health has achieved effective central governance, planning and communication of its new organizational structure. Additionally within OrgPlus, it has a platform for workforce planning to help management make informed business decisions and plan change with confidence.


About HumanConcepts
HumanConcepts is the leading provider of workforce modelling and intelligence solutions. With its OrgPlus technology charting millions of employees for organizations worldwide, including 400 of the Fortune 500, HumanConcepts has defined best practices in organizational charting. OrgPlus integrates with HR systems to automatically create, update and distribute organizational charts for team collaboration, workforce planning and critical decision-making. OrgPlus integrates seamlessly with Microsoft Office applications.

HumanConcepts is based in California with offices in the United Kingdom and Germany and offers OrgPlus software and services worldwide. For more information visit www.orgplus.co.uk.

Sunday, September 9, 2007

A layman's View of VCSY part 3 - arbitrary

A Layman's View of VCSY part 3

“Arbitrary”: Latin arbitrarius = judged between

*****Section 1*****
>>>>WHY?

If an operating system may be virtualized and if an application can be virtualized and if a body of data can be virtualized, can a body of programming code be virtualized?

Of course. A chunk of code is a set of instructions by which the computer carries out the desired tasks. Sometimes the code is a monolithic body as in a single-file application. Sometimes the code is a collection of bodies as in library objects. Sometimes the monolithic code provides hooks to access modular procedural segments in the file. Sometimes a developer can build a hook by finding a particular direct index into the file memory. There is always a way.

Either way, virtualization opens up the ability to treat data bodies and their uses in an “arbitrary” fashion.

Now, that does not mean capricious or willy-nilly. Arbitrary means we can use something without having to argue or worry over the details behind the act of using it. Arbitrary is the basis for “agnostic”.

Patent 6826744 aka 744 provides a novel concept for application building by allowing ANY code of ANY language for ANY machine to be fitted and used with ANY other code of all other arbitrary qualities for ANY purpose including retroconstruction (what? The system can have children.).

So, whereas virtualization fits data to be used universally, arbitration fits data to be used without concern for use. Agnostic fits data for universal arbitration.

Most programmers are aware of the prior art in “wrapping” a code object in a virtualization layer (usually using XML, though it may also be done by proprietary means), which allows that object to be used with other objects not normally fitted to co-mingle.
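
To make that concrete, here is a minimal sketch in Python of the wrapping idea. Everything in it (the function, the envelope schema, the names) is invented for illustration; it is not taken from any patent or product:

```python
# A toy "legacy" function hidden behind an XML envelope, so callers never
# touch its native signature. Names here are illustrative only.
import xml.etree.ElementTree as ET

def legacy_area(width, height):          # the "code object" with a native interface
    return width * height

def call_wrapped(request_xml):
    """Accept an XML call envelope, dispatch to the native object, reply in XML."""
    call = ET.fromstring(request_xml)
    args = {p.get("name"): float(p.text) for p in call.findall("param")}
    result = legacy_area(args["width"], args["height"])
    return "<result>%s</result>" % result

print(call_wrapped('<call name="area"><param name="width">3</param>'
                   '<param name="height">4</param></call>'))
# <result>12.0</result>
```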

744 takes this several steps beyond by outfitting and treating all aspects of all data in such a way. This is done in 744 by compartmentalizing the data into three classes relative to application building and use: content – format – functionality.
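
A toy illustration of that three-way split (my own invented structure, not anything out of 744):

```python
# Each compartment is stored and swapped independently, then assembled on demand.
content = {"headline": "Quarterly Results"}
format_ = {"template": "<h1>{headline}</h1>"}                        # look and feel
functionality = {"render": lambda c, f: f["template"].format(**c)}  # behavior

page = functionality["render"](content, format_)
print(page)   # <h1>Quarterly Results</h1>
# Replacing any one compartment (a new template, new content, a new renderer)
# leaves the other two untouched.
```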

Content and format management are well known concepts from the dotcom era. But, as with Microsoft's segmentation of Silverlight/Expression for designers and Visual Studio .Net for developers, the dotcom era products typically had gaps in integration that made building unified applications difficult and often impossible.

Arbitration removes the gaps between various data uses to bring Content – Format – Functionality under one shop roof allowing the designer to also be the developer (to also be the manager to also be the maintainer to also be the governor) without having to be concerned with how the design or development is actually carried out in code.

Although this sounds like magic, virtualization wrapping demonstrates this sort of idea clearly. Still, no application before SiteFlash (the product body derived from 744) provided a means or an envelope in which to handle virtual forms of content, format and functionality as one application package... or else the patent examiner would have been able to find an example easily.

There are many who say that's not true but we've yet to be pointed to an example, so, until such time, we are in the position of the patent examiner who approved 744 who also saw no prior art to block the 744 grant.

What does this mean to software development?

Arbitration frees the designer/developer from having to know much at all (if any) about the underlying code that is doing the work. A virtual layer abstracts the mechanical workings of “code” into a useful semantic construct that may be used like a modular component.

Some of this capability can be found in newly created projects like MSFT Popfly, for instance. The idea is that various functionalities can be packaged as 'snap-on' components so a program can be built without any programming.
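
A rough sketch of that snap-on idea, with a made-up component registry:

```python
# Behavior is declared as a list of named blocks; the pipeline is assembled
# without writing new code. Component names and registry are hypothetical.
components = {
    "fetch":    lambda _: ["alpha", "beta", "gamma"],   # stands in for a data source
    "upper":    lambda items: [s.upper() for s in items],
    "numbered": lambda items: ["%d. %s" % (i + 1, s) for i, s in enumerate(items)],
}

def assemble(*names):
    def app(data=None):
        for name in names:          # snap the named blocks together in order
            data = components[name](data)
        return data
    return app

print(assemble("fetch", "upper", "numbered")())
# ['1. ALPHA', '2. BETA', '3. GAMMA']
```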

SiteFlash embodies this idea comprehensively by delivering an “ecology” in which all data/code representing each of the three assembly categories may be intercombined to result in applications built from any resource.

The three areas – content (the information to be presented or processed), format (look and feel – interaction functionality) and functionality (program capabilities) – are handled and managed in such a way as to abstract the entire process away from code-programming tasks, elevating the usefulness of virtualization toward the end user and away from the programmer.

*****Section 2*****
>>>>HOW?

Remember the sister patent 521? Remember how the “guts”, or kernel, of the 521 process is a virtual computer built out of markup language, able to process and produce markup language? Such markup language construction is able to build an abstracted construct around any data encountered, whether content data (your name here), format data (your presentation here) or functionality data (your workflow here), so each compartment may contain all (any) content, all (any) format and all (any) functionality from any resource (reachable via internet) to achieve all (any) computing purpose.
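
To picture a “computer built out of markup”, here is a toy interpreter that consumes a markup program and produces markup output. It is entirely my own invention for illustration; the element names have nothing to do with the actual 521 design:

```python
# A markup "virtual computer": each child element is an instruction; the
# machine's output is itself markup.
import xml.etree.ElementTree as ET

program = ET.fromstring("""
<program>
  <set name="greeting" value="hello"/>
  <emit from="greeting"/>
</program>""")

env, output = {}, ET.Element("output")
for step in program:                       # walk the markup instructions in order
    if step.tag == "set":
        env[step.get("name")] = step.get("value")
    elif step.tag == "emit":
        ET.SubElement(output, "item").text = env[step.get("from")]

print(ET.tostring(output, encoding="unicode"))
# <output><item>hello</item></output>
```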

Each compartment is managed separately, while each compartment element, combined with other elements (of the same compartment or different compartments), is likewise managed and arbitrated into an abstracted construction, transparent to the user and kept as a construct by the ecology.

How is that possible? There are a finite number of resources in any number of computers interconnected with each other. There are thus a finite number of possible configurations any application may be required to take. Within the body of a “program” there will be a finite number of possible components to affect a finite number of operational actions or functionalities.

A finite state machine (a deterministically transacting machine) is able to compute all possible combinations for all possible states and deliver the required combination to the particular use requirement at specific states.
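
A minimal deterministic state machine, with made-up states, shows the idea:

```python
# Every (state, input) pair maps to exactly one next state, so all reachable
# configurations can be enumerated ahead of time. States are illustrative.
transitions = {
    ("idle",       "request"):  "assembling",
    ("assembling", "resolved"): "delivering",
    ("assembling", "conflict"): "idle",
    ("delivering", "done"):     "idle",
}

def run(state, inputs):
    for symbol in inputs:
        state = transitions[(state, symbol)]   # deterministic: one next state
    return state

print(run("idle", ["request", "resolved", "done"]))   # idle
```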

All this is made possible because the inner workings of 744's virtual machine and abstracting language are based on markup extensibility, just as 521's virtual machine and native programming language are made up of markup extensibility. AND such extensibility is dynamic: the markup is a script which may be compiled for ultimate package use at the client, but may live forever in the ecology as a malleable script... which may be repurposed as desired for multiple uses, given the component nature of the compartmentalized content/format/functionality.

These kinds of statements make programmers angry because they are the attainable goals everyone has said are intuitively reachable, but McAuley and Davison are the first to build solid and demonstrable constructs to achieve these lofty “intuitions” wished for by IT heads. I don't think the patent office patents intuitions and wishes. I do know they patent solid and demonstrable first-art constructions.

That's why VCSY owns these two lawfully granted patents. That is why programmers everywhere are likely angry at VCSY. Possibly angry enough to say some very stupid, libelous and tragic things.

I realize it sounds like magic, just as Forth programming sounded like magic to traditional-language programmers in the '70s and '80s. Because Forth used a tiny virtual machine as the interpreter for all Forth primitives built into vocabulary words, the programmer could fashion the application to do whatever he liked by defining words (objects) in a vocabulary (library), building up compilers and interpreters which were then used to construct the program. The program was assembled by placing words from each vocabulary in an appropriately sequenced position inside a newly defined word. This word (procedure) was likewise placed in the vocabulary (library), allowing the developer to use an abstracted form of the workflow he was attempting to achieve. Thus, Forth could evolve toward human-language syntax, which became the basis for further programming construction.
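
For the curious, here is a toy Forth-style interpreter in Python, just to make the vocabulary-of-words idea concrete (a sketch of the idea, not real Forth):

```python
# Primitives live in a vocabulary; new "words" are defined as sequences of
# existing words, so the vocabulary grows toward the problem's own language.
stack, words = [], {}
words["+"]   = lambda: stack.append(stack.pop() + stack.pop())
words["dup"] = lambda: stack.append(stack[-1])
words["."]   = lambda: print(stack.pop())

def interpret(source):
    tokens = source.split()
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == ":":                                # colon definition: a new word
            name, end = tokens[i + 1], tokens.index(";", i)
            body = tokens[i + 2:end]
            words[name] = lambda b=body: [words[w]() for w in b]
            i = end + 1
        elif tok in words:
            words[tok]()
            i += 1
        else:
            stack.append(int(tok))                    # a literal number
            i += 1

interpret(": double dup + ;  7 double .")             # prints 14
```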

SiteFlash is a cousin to such a concept but distinctly different in execution.

The traditional procedure-assembly of objects that ultimately provide extensibility is the foundation for all current and prior IDEs, and is what confuses 744 readers who quickly forget what the patent is saying and begin crying foul over something so “obvious” and “overly broad”. They are wrong because of the nature of 744 and 521 construction, which renders the very fabric of these patents extensible BY NATURE: at the core, as it were. NATIVELY extensible, so the kernel can become anything at any time for any use.

So, “obvious” and “too broad”? Not so. The elementally extensible virtual machine concept (instead of a procedural compiler assembling object blocks, as in procedural languages like C) allows the programming base elements to take on ANY form; thus, the VM may change to accommodate the code, allowing it to abstract code chunks without changing or even touching the code.

Traditional programming languages force the programming environment to adhere to a carefully structured syntax and form with the objects providing some measure of abstracting capability.

Extensible programming languages and ecologies allow the programming environment to morph dynamically (real time) to accommodate the changing needs of the code user without ever having to change the base code.

Thus, write once, use many becomes write once, use ANY.

Some will think it's a matter of semantics, and it is. Semantics describe architecture and operational structure which are what ultimately determine real capability and reach. The word to hack between the traditional method and the 744/521 method is “extensibility” - the same “eXtensible” found as the base abstract in XML.

So, the bottom line is: any “virtual machine” dependent on an object-oriented/library-based code pool (like JAVA) will offer only a limited abstracting facility because the “virtual machine” employed is not itself extensible. It is what it is, and the main office will have to bring their developers to bear on the kernel to build in further “extensions” when they get around to it, or when the corporate office figures they've pissed off enough customers.

With a markup language based kernel, the kernel may morph at any time to whatever extension necessary, then become whatever other entity required next.

Cool, huh? That same capability can be found in the 521 patent which now explains how 521 can virtualize and abstract in a very granular way while 744 virtualizes and abstracts in a platform wide way.

This adaptability of the virtual machine to the virtual use results in very small code bases that serve very large programming frameworks.

Such capability finds a greater kinship with simulation/emulation than “programming”. Thus, 744 SiteFlash and all derivatives may simulate or emulate any abstraction of any code (it may even become a particular computer architecture as needed – thus virtualizing and abstracting even computer hardware into an emulation of any hardware process) at any level or scope just as the 521 patent may emulate/simulate any data virtualizations/abstractions at any level or scope including at the machine code.

This pseudo “overlap” in 744 and 521 provides a seamless and bumpless capability for an “application” to integrate as an operating system + application + any resources needed... into a single package that may be shipped to the client and run.

And that brings us to 744's strength: massive affiliation.

Because 744 can integrate ANY abstraction, the user may employ various standards of specification, construction, management, maintenance, governance at any targeted point in the entire framework.

This capability is echoed in 521 so scaling is extensibly flexible according to the information theory speaking to granular construction of abstracted components to abstracted applications. 744 covers component assembly of abstracted capabilities of abstracted applications to abstracted frameworks and ecologies.

Conversely, the extensible nature also allows 521 to create applications that create operational components. 744 allows ecologies that create operational cultures.

And then you can take those and fold them into another evolution of components and cultures ad infinitum.

So, SiteFlash can accomplish all this by itself using markup and code libraries. But adding 521 allows 744 to fly. So 744 by itself? OK. 744/521? Unbeatable.

Then, there is the ability to massively affiliate this “program” to all users anywhere on any machine with any requirements. The program continues to be a part of the ecology for its entire life-cycle (as long as an internet connection is available or cache is long-lasting and comprehensive enough which may be seconds, minutes, days, weeks, months, years... etcetera etcetera etcetera) and is able to track its processes by virtue of the granular governance and audit capabilities in elements built out of 521.

How? The package arrives with all necessary requirements (which have been extracted from the metadata passed between server and client machines) that allow all possible conflicts in construction and use to be resolved dynamically in the SiteFlash base before the package arrives. Once there, the package has been abstracted for that particular use case (requirement/design/construct is an abstracted workflow in SiteFlash so it self-assembles from object in the library to application body) and is fitted for THAT specific use.

Because SiteFlash is an ecology covering ALL aspects (which may be added to or taken from the libraries at any time) of the software use, the delivery tailored for each computer allows large communities of computer users to engage in community collaboration with the level of their local client being factored into the use.

THAT is where computers should be right now. But they are sadly not, except in some areas which appear to be using SiteFlash capabilities, apparently under some sort of use permission from the inventor and VCSY.

This embodiment of arbitrary use frees the machine's ability to virtualize data for human use and to allow machines to take over a larger part of the build process.

Onward and upward.

And just a word about “obvious” and “too broad”. These are not words subject to the wishful fantasies of people so entangled in the life and death struggle for paradigm relevance they know only their part of the river and parts downstream. The effort from C. Babbage through A. Turing and through C. Moore has been a greater sophistication toward simple elegance in processing numbers and ultimately human words and abstractions. That is the goal even though others want to hold their monopolies on productivity and advanced thought.

A programmer today is not intellectually fitted to explore parts of the river upstream because his knowledge base requires he either acquire new knowledge or invent new knowledge. The first is far easier than the second if the programmer can swim against the current. The first is just as remotely possible as the second if the programmer will only kick and scream at the shoreline. The first is more impossible than the second if the programmer floats on his back and blows bubbles.

Such is the nature of abstraction that the closer we get to machine intelligence the more stupid our workers will become. We only see hints of it now. We will one day wonder how we didn't all drown in our own goo in our sleep.

Saturday, September 8, 2007

A Layman's View of VCSY part 2 - virtual

I realize these will not qualify as "Layman" explanations as a certain amount of information processing theory must be available to cover the ground.

I will attempt to further simplify although questions can provide a catalyst for reducing complexity and confusion.

Layman's view of VCSY part 2

In order to explain why MSFT is blocked by VCSY patents, I will have to explain what needs to happen (Section 1) and how that happens (Section 2) with VCSY's patent 7076521 aka 521.

I will attempt to provide a “Short Story” version, but, for now, this is the long version.

*****Section 1.*****
>>>>WHY?

To decipher why VCSY patents and patent derived products trump the capabilities of other technologies, we need to examine two key words: Virtual and Arbitrary

“Virtual”? Virtual means “not actual”. “Actual” means the real data contained in an application or available for direct processing by the application. Essentially, “virtual” means a faithful representation of the actual format. Further on from there, virtual means a faithful execution of an actual process in another processing form.

Virtualization is a buzzword lately come into IT vogue, given the public exposure of a coming “battle” between VMWare, Microsoft Viridian and other virtualization products. Virtualization will revolutionize the use of operating systems and applications and bring enhanced value and productivity to new and legacy systems.

A fundamental tenet of virtualization teaches that anything “soft” (as in soft-ware) can be represented as something else soft without changing the fundamental operating qualities. Data is hard reality handled in a soft form without changing the essential “being”.

Data can only be consumed if it is presented in a useful form.

Data may be “native” to a system, meaning the data is readily understood and processed because the application is built to use the data in the presented form.

Or, data may be “foreign” to a system, meaning the data must be processed first into a useful format (form) before the system can then treat that data as though it were native.

An analogy can be seen in the difference between Metric and English measurement. Both are “values” and therefore are “data”.

Both measurement systems are readily understandable to humans. But unless a means of transforming one value into the other (via reference table or some other ready means) is available, one data format will likely be considered native and useful to the consuming human culture while the other data format will be foreign and unusable.
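
In code terms, a trivial sketch of my own:

```python
# A "native" value is consumed directly; a "foreign" one is usable only once
# a conversion (the reference table) is available.
CM_PER_INCH = 2.54                      # the "reference table"

def native_cut(length_cm):              # a system built around metric input
    return "cut at %.1f cm" % length_cm

print(native_cut(30.0))                 # native data: consumed as-is
print(native_cut(12 * CM_PER_INCH))     # foreign data (inches) transformed first
```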

A system processing native data may produce and consume that data in any way desired by design. There are no boundaries to that processing other than the creative limits of the builders. However, ingesting one small piece of foreign data misapplied will stop the system cold no matter how brilliant the system designers unless they have built into the system a way to handle the unexpected or foreign.

Designing for all possible screw-ups or surprises is what builds cost into a system. Simple is cheap.

Thus, a cheap, simple system processing native data will do well as long as the foreign requirement is never faced. It is not a matter of resources. It is a matter of presentation. Even the most expensive and complex system will become dumb as a fence-post with foreign data.

So, is the solution X: to learn all possible forms of expression? Or is it solution Y: to provide one universal form of expressing all possible forms? Decades of computing design demonstrate that option X is expensive and difficult while option Y is inexpensive and easy.

So, the data between different systems should be standardized into a single universal form so that all applications may apprehend and consume that data without having to modify the established methods built into the proprietary system.

That's a tall order since all cultures consider their proprietary standards to be superior to other cultures.

Fortunately, in the data world, there is a standard that allows this universal presentation, and that standard is called XML (for eXtensible Markup Language). Remember this: all XML does as a “programming language” is represent data as structure and value. Fortunately, as with all elegant solutions, that is all that needs to be done to provide a basis for computing virtualization.

But, you will find, it is not the expression that virtualizes but the processing available to arrive at that expression that does the virtualization.

If proprietary application A and proprietary application B send each other copies of their respective native data presentations, chances are they won't be able to work with either as any small portion showing up as a foreign article will cause confusion and error or downright failure. Unless the builders built or modify the applications to work with each other using a same data format, interoperation is not possible.

However, if app A and app B can simply transform each of their native forms into XML and back again to native, the XML that meets in the middle will represent a universally understood form any other application with a standard XML capability may process. Thus, instead of having to modify an application to work in a new system, the application may join the system as is and the availed XML allows that application to work with any other application in the universe of “interoperating” apps.
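
Here is a bare-bones sketch of that native-A to XML to native-B hop, using toy native formats invented for the example (neither app's real format):

```python
# App A keeps dicts; app B keeps delimited strings; XML meets in the middle.
import xml.etree.ElementTree as ET

native_a = {"id": "42", "name": "pump"}            # app A's native form

def a_to_xml(record):                              # app A's export step
    item = ET.Element("item")
    for key, value in record.items():
        ET.SubElement(item, key).text = value
    return ET.tostring(item, encoding="unicode")

def xml_to_b(xml_text):                            # app B's import step
    item = ET.fromstring(xml_text)
    return ";".join("%s=%s" % (c.tag, c.text) for c in item)  # B's native form

wire = a_to_xml(native_a)
print(wire)            # <item><id>42</id><name>pump</name></item>
print(xml_to_b(wire))  # id=42;name=pump
```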

The data in each application can be said to have been “virtualized” or changed from actual to virtual because the XML representation of the data is a transformed instance of the value in a proprietary format and structure. Unless the processing consumes XML directly (something we will cover in examining “arbitrary”), the native form of the data is the actual or real form being processed by the applications. The virtual form of the data is what is being pooled for the entire system to use for communication (again, and/or depending on if applications can process XML directly – we will cover in “arbitrary”).

So, to achieve at least the first level of virtualization, we go from native A to XML to native B. We can, of course, then go from native B to XML to native A. We thus have “electronic data exchange” which is the simplest form of virtualization and a mainstay of XML “programming” in IT systems for the past decade+.

Virtualization as a simple core capability is thus the ability to universally represent the native form of any data in an application.

The XML standard is designed to allow all builders to virtualize data for use by any other builder.

HOWEVER.

XML does not have to be used to virtualize. “Virtualization” per se only requires a standard agreed upon by consenting builders. A builder may use his own standard or a standard agreed upon by other partner companies. This is often done by partnering builders to exclude competitors; a key of sorts, letting folks who know the standard into the clubhouse while keeping others without a key out.

So, even if XML is used, companies can develop various proprietary ways to process the transformation to and from the XML presentation, thus allowing only those applications adhering to the builder's proprietary XML standard to join in processing that data.

This is the case with the Microsoft version of XML for Office application document files, called OOXML. Microsoft's variations on the base XML standard are the source of conflict in the international standards community, where the ODF standard presents XML that may be processed by ALL applications, whereas the MSFT version of XML has been crafted to exclude non-MSFT application uses.

So, virtualization may be applied in a number of different ways for different purposes.

For example, VMWare uses proprietary methods to virtualize the targeted operating system/application relationship so the OS/application combinations may be used on different operating systems. The virtualization method is proprietary so you can't simply plug your operating system into the VMWare system unless VMWare has crafted the system to handle your OS and apps.

Sun ZFS uses proprietary ways to virtualize the data in the operating system so outside non-Sun applications may communicate with the OS without having access to the proprietary OS command and data bodies. This is a more “granular” or lower-level type of virtualization, more akin to what VCSY can provide. Sun ZFS does what Microsoft WinFS was supposed to do before WinFS was killed. Sun has a read/write version allowing applications to pass information back and forth between the Sun OS and the application across the virtual path. Apple has a read-only version of ZFS allowing apps to get values from the OS but not to pass data back to the OS.

Microsoft OOXML allows Microsoft and partner applications to process a virtual representation of the MSOffice document data while locking out non-partnered applications. This is called “lock-in”, ensuring the users will have to buy MSFT/partner products to join in the processing interoperation.

If these companies were to convert their applications to present all their operations to the outside world in international standard XML, ALL other applications would have an opportunity to join the party. So, why don't they?

The most prevalent argument has it that the vendor doesn't want to commoditize their products by allowing them to be accessed and intermixed with other vendors' products. But that seems a lame argument when analyzed against the resulting customer value and the value added to each vendor's products (unless the virtualization uncovers intentional lock-in built in by a vendor – THAT is the fear).

The next argument has been that virtualization is difficult and that is true if modifications to your current product line are required. THAT is a valid argument and a real problem and is a key reason for using 521.

521 does not require modification to the application.

Why? Here is the biggest kink: being able to virtualize data is ONLY the first step in the process. The idea is to virtualize the computing process that produces and consumes the data.

Simply being able to export and import XML is a large step in application evolution but by no means the end of the road. Without a way of passing information about the data and the process (data called metadata, or “data about data”), the applications are flying blind.

Various parts of any body of data being virtualized into XML will likely be used by the applications as values. Other parts to be consumed may be metadata. Still other parts of the body of data may be event triggers, commands and process state parameters to be used by the various applications to make sure all interoperating processes are carried out properly. These elements must all be available as a virtual product for the application interconnection to be conducted properly.
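
For illustration, here is a hypothetical message mixing all three kinds of element. The schema is mine, invented for this example, not from any standard:

```python
# One XML body carrying plain values, metadata about them, and a process trigger.
import xml.etree.ElementTree as ET

message = ET.fromstring("""
<exchange>
  <meta source="app-a" schema="orders-v1"/>
  <value name="quantity">12</value>
  <event trigger="recalculate-total"/>
</exchange>""")

print(message.find("meta").get("schema"))      # orders-v1  (metadata)
print(int(message.find("value").text))         # 12         (a value)
print(message.find("event").get("trigger"))    # recalculate-total (a trigger)
```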

So, since an application must FIRST be able to express itself to the outside world as XML, we now find the biggest current obstacle: routines to import and export XML are not always available, and routines to process XML internally are even less likely to be available.

Although Microsoft (not to pick on MSFT, but they are a perfect example – the rest of the world is worse off) is a large company with many resources, only a portion of their products import, export or process XML. That means they must either modify those applications to import/export and internally process XML or use an outside application the industry refers to broadly as “middleware”.

Legacy systems built before circa 2000 likely do nothing with XML and rely solely on native data forms to operate. These must have middleware solutions appended to their operations to virtualize, as modifying legacy systems is often difficult to impossible.

Modern systems also likely have little or no XML capability, as the industry has battled with various methods of implementing solutions to the legacy problems (their software becomes legacy to future XML installations the moment it ships). The least troublesome solution is typically to not provide the capability in the application body itself but to hold off for further development, which is likely a middleware solution.

So, how does a middleware solution used with legacy and proprietary applications allow them to be virtualized?

Now we come to the reason for patent 521 and the reason companies like Microsoft and various others are unable to do much more than the simplest interoperating processes using their own technologies between like or unlike applications.

*****Section 2*****
>>>>HOW?

How Doooo they Do it?

The title of 521 is a “Web-based collaborative data collection system”. This title is deceptively simple, as the action described by that title phrase alone allows a wide range of virtualization capabilities.

You may notice the patent describes all the various elements required to form a computer. Usually computers are built as processing assets on a hardware chip. But 521 is a virtual computer, also known as a virtual machine or VM [1].

The concept of a virtual machine is not unique. The architecture of this particular VM IS unique, however, and is key to the novel 521 capabilities. I will be explaining only the simplest operation as that may jog further thinking for other solutions.

The 521 virtual machine is designed to run on a web platform such as a browser, yet able to connect to data in a proprietary application (including an OS since that is an application albeit the underlying “base” or “platform” application between the applications and the bare hardware referred to as “bare metal”).

Because the 521 is typically used to augment and support programming workload on the client, the 521 product is called an “agent”. Actually, an agent is what can be built using 521 claims for 512 is actually the creative ecology for all derivative agent products and claims.

What that means is 521 is what you build 521 products (which are agents) out of.

We will do the simplest use of 521 here before we graduate to larger project capabilities and patented product derivatives which are potentially very many.

So, we have a “web-based” agent which may reside on the browser (or on the OS or on bare metal, but more about that later) which may retrieve proprietary data on the local machine, process that data and present that data as an XML representation.

The XML data may be transported by the agent via http to any other real or virtual computer for further process. It is the http protocol that makes the agent “web-based” and does not limit the agent to residing on the browser.

In our virtualization examples above, the other “target” computer would use a similar [or different – doesn't matter] agent to transform the produced XML into the proprietary data form and place that in the proprietary data store for that client application to work with.

At that point the client could be anything from a mainframe to a micro-device.

That is essentially one of the simplest “bridges” you can build and is a base concept that describes one of the 521 children: the “XML Enabler Agent”.

In this capacity, the 521-derived agent can act as an agent for a proprietary application, virtualizing any size of data body in the application datastore. That virtualization may be expressed as anything (not just XML) via various programming inputs on the MLE, so proprietary IN, XML (or anything) OUT is the beginning of a universal virtualizer.
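
A loose sketch of the enabler-agent role as I have described it. The CSV “proprietary” store and the endpoint URL are stand-ins I made up for the example:

```python
# Read a local "proprietary" store, express it as XML, ship it over http.
import csv, io, urllib.request
import xml.etree.ElementTree as ET

proprietary = io.StringIO("id,status\n7,open\n9,closed\n")   # the local datastore

root = ET.Element("records")
for row in csv.DictReader(proprietary):                      # proprietary IN
    rec = ET.SubElement(root, "record")
    for key, value in row.items():
        ET.SubElement(rec, key).text = value
payload = ET.tostring(root)                                  # XML OUT

request = urllib.request.Request(
    "http://example.invalid/ingest",                         # hypothetical target
    data=payload, headers={"Content-Type": "application/xml"})
# urllib.request.urlopen(request)   # the transport step; needs a live endpoint
print(payload.decode())
```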

SO, COULD MICROSOFT DO IT DIFFERENTLY?

Sure. They can modify the various individual server-based applications they own to be able to express XML to the client and return received XML back to the proprietary expression on the main server or some other server (what SOAP does). This configuration would preserve their “server-centric” philosophy, but they would then not be in a position to act as a replacement for the main server should the internet connection between the client and the main server go down. Such “off-line” processing while waiting for the on-line web connection to return is a significant point of contention among SaaS critics.

Such on-line/off-line capability may only be served by a processing facility at the local client machine. This is what the agent does.

MSFT can provide executive agents of their own at the “end” of RSS pipes at the local client (this would leverage the inbred subscription/transaction activity in RSS) with XML routers to pass messages to various executive elements on the browser or client OS. The problem there is all processing of the proprietary data to XML must necessarily occur remote from the local client at the main server. Therefore, latency between the processed answer and the arrival at the client is exacerbated by no off-line processing capability.

This RSS method is a key component of Ray Ozzie's vision for providing MSFT's software+service distributed computing. While it works with a Remote Procedure Call architecture like SOAP, it does not qualify as “distributed computing” but, rather, distributed services.

The off-line processing vulnerability is the key obstacle inherent in the RSS method. The RSS method still needs a local agent to perform processes while the web is off-line (which renders the RSS silent until connection is resumed).

So, the above two example workarounds show the answer to “Could MSFT do it differently?” is a qualified 'yes, but'. The solution does not meet the requirements of true web application construction and does not accomplish virtualization at the client but requires virtualization at the main server – a method which has been the traditional means of providing web pages since the 1990's.

The main reason VCSY's 521 is superior to MSFT method is found in 521's ability to act anywhere in any configuration under any circumstances. “Proving” that is simply a matter of walking through thought experiments with various architectural configurations being exercised to watch the issues faced by designers. This can be done by 521 against any other vendor methods and 521 holds flexibility, scale, and power advantages throughout. I will gladly provide such thought experiments in greater detail.

I believe the virtualization and client-side server capacity are inherently desirable compared against the traditional RPC method.

The sister to “virtual” is “arbitrary”; a term not heard in current buzz because it has not been cataloged by any mainstream provider. Arbitrary is the keyword for the 744 patent which is the one MSFT is being sued for infringing in .Net.

I think you will see where in .Net such “arbitrary” capability might be needed and where it appears to be or have been.

Footnotes:

[1] Virtual Machine aka VM is a data processing computer that is not actual or “real” but is built out of software and runs using internet protocol (in this discussion) to communicate with other computers whether actual or virtual. The 521 VM is a virtual micro-server able to perform data communications between internet systems using the http protocol. It is thus a web server that runs on the local actual machine. A VM like 521 may be called a “runtime” [3], but 521 calls the VM with accompanying processing resource streams an “agent”.

[2] The 521 VM is comprised of primitive elements that perform disk and other resource I/O operations. These elements may be assembled into a workable application accessing the proprietary resources and functions of the underlying platform (typically the OS running the browser) by invoking the primitives in a dynamic markup script. This is the “program” that builds the executive functions and workflow of the web-based application.

[3] The term “runtime” is a proprietary word denoting the operating executive kernel residing as the first and central element of a VM coupled with a programming language to access the resources connected to the kernel. Typical of such a description is Microsoft's Common Language Runtime, which is the means of actioning Windows, and the new Dynamic Language Runtime, which is designed to operate on the web and run a markup-based programming language.

[4] VCSY's “dynamic markup language” Emily (the executive kernel is the Markup Language Executive) was introduced by VCSY in 2000. Microsoft's dynamic markup language is still under wraps.


Signed,
the real portuno
http://search.messages.yahoo.com/search?.mbintl=finance&q=portuno_diamo&action=Search&r=Huiz75WdCYfD_KCA2Dc-&within=author&within=tm

A Layman's View of VCSY part 1a/b - background

Note, I am placing these posts on this board because Yahoo has a 4000 character limit on posts there.

The following was posted on the Yahoo/VCSY board:

http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=361&mid=361&tof=1&frt=1
Message for Portuno or any other long (Not rated)
8-Sep-07 04:37 am by ns5000
Can you make a case for vcsy for a non-techie or "traders" as you put it.

Please do not put any links. Just explain in simple terms why msft can't develop vista or viridian or .net without the 2 patents you have been talking about.

I could consider investing purely as a speculative buy if you are able to spell out your case clearly in simple terms.


I responded with this and will add to the effort shortly:
http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=361&mid=369&tof=1&rt=1&frt=1&off=1
Layman's view of VCSY issues Part 1a (Not rated)
8-Sep-07 12:05 pm by portuno_diamo
One quick explanation I can provide is this:

Microsoft's entire development philosophy has been centered around their operating system on the PC for decades.

During those decades, VCSY's inventors' life work focused on the network and specifically the internet as a network.

The VCSY inventors worked on distributed networked computer concepts while MSFT worked on the personal local computer network.

By the time the internet was mature enough to have a "boom" centered on it, the VCSY work was mature enough to deploy, thus VCSY work was made available to the market in 2000.

Microsoft had only been able to build a browser and was only then branching out to attempt to mature smaller experimental efforts. Thus, MSFT's work comes to the market in 2008.

From 2000 to present, VCSY has been able to further develop their work.

At the same time, MSFT struggled to come up to speed on web operating systems and web applications within a corporate culture that dismissed the internet as a valid platform for building anything more complex or robust than electronic magazines.

THAT cultural difference between MSFT design and VCSY design is what sets the stage for the power behind the VCSY patents.

Remember what happens in design/build. The designer considers the problem and comes up with A solution. That solution may or may not flesh out properly. Revisions of that solution or entirely new solutions attempt to address the newly met issues and often require further mods or remakes based on new issues that crop up.

It's just like writing and having to do repetitive drafts to come up with the most elegant form.

VCSY's inventors have been doing this process for decades. Microsoft has been doing it in a limited way, decades after the VCSY inventors made it through their most basic obstacles.

The trader simply needs to consider this fundamental difference to see the effect of MSFT's "internet is a fad" philosophy. VCSY was able to virtualize and arbitrate (I will go into more detail on why these two concepts are central to breaking free of the "PC" OS) while MSFT was hell-bent on keeping everything locked to the PC.

Some say MSFT has superior resources, but resources are not the issue. Simplicity and elegance in response to all the various screw-ups and dead-ends encountered in the human design process is key.

(continued in next post)
Layman's view of VCSY issues Part 1b (Not rated) 8-Sep-07 12:05 pm
(continued from previous post)

If resources were the problem, the most critical issues could be solved more quickly by simply adding people and money. So what YOU do at work should be doable by 20 times as many people in 1/20th the time. We all know it doesn't work that way in design, because each of the 20x resources will encounter the same sets of issues and come up with various ways to solve them. Without the collaboration tools to bring those issues and solutions to the front so the 20x resources can contemplate them as one mind, the result is not 20 years of collective experience but 20 one-year experiences.

If MSFT could have done what VCSY can do, they would have done it back in 2003/2004 when they were high on XML and touting Longhorn and all the other parts and pieces to anyone who would listen. Instead, it all went into a vault and has STILL not seen the light of day. All that is STILL not on the market because MSFT is only now issuing test versions promising to deliver even more test versions in 2008.

Meanwhile, the record shows VCSY has been quietly working to mature the solutions that were available as products for sale in 2000 and 2001.

Big difference.

Based on information VCSY longs have found, after the dotcom boom went bust, taking VCSY's share price down, VCSY was quietly working with others to flesh out their philosophy.

In the next post (which will require some simplification that I won't have time to do until later this evening) I will explain HOW 521/744 are technically superior to anything MSFT or any other vendor has built for network architecting.

Then, I will explain "virtual" and "arbitrary" so you may see why the VCSY method is fundamentally superior to anything MSFT or any other vendor has built for building, operating and controlling networked applications.

Hope this helps. There will be more of
the real portuno
http://search.messages.yahoo.com/search?.mbintl=finance&q=portuno_diamo&action=Search&r=Huiz75WdCYfD_KCA2Dc-&within=author&within=tm

Tuesday, September 4, 2007

IBM Patent 7,058,671 referencing VCSY 6826744

Baveman found this IBM patent referencing the VCSY SiteFlash patent. I have attempted to describe the patent's purpose and its cited reference patents in a narrative below. It could use some refining, so that's your homework: verify for yourself what it is you are seeing.

The IBM patent 7058671 appears to be aimed at automating the construction of programs on the web.

http://patft.uspto.gov/netacgi/nph-Parser?u=%2Fnetahtml%2Fsrchnum.htm&Sect1=PTO1&Sect2=HITOFF&p=1&r=1&l=50&f=G&d=PALL&s1=7058671.PN.&OS=PN/7058671&RS=PN/7058671
ABSTRACT: A method and system for delivering dynamic web pages in the INTERNET. Compiled programs embedding static queries to a database are stored on a server computer; view templates with HTML tags defining the layout of corresponding dynamic web pages and data tags instructing where and how to include each record of the query result into the respective dynamic web page are further stored on the server computer. When a dynamic web page must be distributed, the corresponding program is run, and the query result is stored into a shared memory structure. The query result is combined with the corresponding view template, by replacing the data tags with the associated records in the shared memory structure. The resulting web page is then distributed to client computers of the network.


If you look at the description of the patent's uses, there are no "users" involved. The patent performs work of aggregating complexity with the aim of producing a deliverable sent to the user at the client. The deliverable is an application, aka program, and the patent claims the "...means for sending the view structure to at least one client computer of the network for causing the view structure to be displayed on the at least one client computer, wherein the corresponding view template is generated by a compiler and has the form of a directly executable program."

Thus, the end result is an autonomous system assembling blocks of "information" comprising the necessary content, format and functionality into a "view" template, which is then compiled into a program (application) and delivered to the client for application use.
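
A rough sketch of the merge step the abstract describes. The data-tag syntax and the names here are invented for illustration, not quoted from the patent:

```python
# A stored query result is merged into a view template by replacing data tags
# with records, yielding a page ready to ship to the client.
view_template = "<html><body><ul><datatag name='rows'/></ul></body></html>"
query_result = {"rows": ["<li>widget A</li>", "<li>widget B</li>"]}  # "shared memory"

def render(template, results):
    for name, records in results.items():
        template = template.replace(
            "<datatag name='%s'/>" % name, "".join(records))
    return template

print(render(view_template, query_result))
# <html><body><ul><li>widget A</li><li>widget B</li></ul></body></html>
```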

That's it in a nutshell. Any application may be composed of content, format and functionality:
1. content (text, images, sound - anything humans or machines may produce as a deliverable of processed data - metadata attached to this content is available to guide the machine on the various processing requirements)
2. format (specifications and objects that define how the content will be presented to a human user and/or a machine user)
3. functionality (the stepwise instructions [via diagrams, outlines, narratives - think UML and other program modeling methods]).

I've tried to throw together an explanation of what IBM patent 7058671 does and what the cited patents add to the discussion. There is plenty of room for improvement; this is just the result of an extra cup of coffee and an English muffin with extra butter. This is a rough analysis and is my own opinion. It may be further refined, and that's what you the reader should do so you can understand this concept thoroughly. This is a foundation for modularly constructed applications, delivered and handled autonomously by machines under human supervision.

Below are the citations the patent rests its claims on, based on the evolution of past patented prior art. The citations describe what past patents reflect as founding or similar methods and advancements. The patent describes what activity regarding those patents, plus the additional advances described in the claims, is used to create the invention's unique qualities.

First, view the patent references by themselves.

Patents cited in the 7058671 patent (aka patented prior art), listed as number – title – issue date:

5835712 – Client-server system using embedded hypertext tags for application and database development – Nov 10, 1998

5894554 – System for managing dynamic web page generation requests by intercepting request at web server and routing to page server thereby releasing web server to process other requests – Apr 13, 1999

6055570 – Subscribed update monitors – Apr 25, 2000

6275830 – Compile time variable size paging of constant pools – Aug 14, 2001

6826744 – System and method for generating web sites in an arbitrary object framework – Nov 30, 2004

As you see them listed in chronological order, you may also see an evolution of capability providing a foundation against which the present patent's claims may be validated.

This is a ratification of the cited patents' claims by a progressing extension of the art.

Now, view each of the cited reference patents fleshed out individually.
In this example the progression of capability is fairly easy to see if we dissect and display each:

1. 5835712
TITLE: Client-server system using embedded hypertext tags for application and database development
ABSTRACT: A system and methods for rapid deployment of World Wide Web applications on the Internet. A preferred method provides a template, accessible to both client and server, for constructing Web source text. The source text includes HTML tag extensions for implementing dynamic Web environment. The tag extensions are nested and grouped to form scripts to perform specific tasks, such as state construction and on-line data arrangement. Each tag extension or script is expanded and replaced with data value to be embedded within a traditional HTML tag. A processor is employed to process templates and execute tag extensions therein, and produces pages in pure HTML form for displaying by any Web browser.

1. Says we can equip a template, shared between the server and the client, to build web code using HTML "tag extensions" (encompassing XML), building web pages as required on the fly (dynamically), with the capability to act as functional processing applications ("state construction and on-line data arrangement").

Huh?: Web pages can be more than look, feel and interaction. A web page can be an application or even an operating system in itself. This patent addresses a method to build applications out of web pages on a changing basis as requirements, resources and specifications change. It is the first step toward autonomous web operation, as the machine may understand and construct web components.

ADVANCE: Instead of relying on old-art traditional methods of identifying and indexing content, [1] provides tagging and templating methods to machine-build programs using "web pages".

Key: TAG
---
2. 5894554
TITLE: System for MANAGING dynamic web page generation requests by intercepting request at web server and routing to page server thereby releasing web server to process other requests
ABSTRACT: The present invention teaches a method and apparatus for creating and managing custom Web sites. Specifically, one embodiment of the present invention claims a computer-implemented method for managing a dynamic Web page generation request to a Web server, the computer-implemented method comprising the steps of routing the request from the Web server to a page server, the page server receiving the request and releasing the Web server to process other requests, processing the request, the processing being performed by the page server concurrently with the Web server, as the Web server processes the other requests, and dynamically generating a Web page in response to the request, the Web page including data dynamically retrieved from one or more data sources.

2. Says we can create a system whereby we get "...a computer-implemented method for managing a dynamic Web page generation request to a Web server", allowing the server to delegate processing to other servers better targeted for handling generation of the product.

What the...?: This builds a supervisor machine which can take in requirements from all areas and delegate the delivery workflow to other machines. The machine becomes an autonomous contractor.

ADVANCE: Instead of relying on monolithic systems using a single server to receive, process and deliver all web page product, [2] provides task management by delegating production duties to distributed servers with their own distributed resources, overseen by a request-processing supervisor. This allows servers to compartmentalize various processing needs on a server-by-server basis rather than by segments of servers.
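
A bare-bones sketch of that delegation pattern, with threads standing in for separate page-server machines (my own toy, not the patented system):

```python
# The front server only enqueues each request and is immediately free for the
# next one; page servers generate the pages concurrently.
import queue, threading

requests = queue.Queue()

def page_server(name):
    while True:
        req = requests.get()
        if req is None:                   # sentinel: shut this page server down
            break
        print("%s generated page for %s" % (name, req))
        requests.task_done()

workers = [threading.Thread(target=page_server, args=("page-server-%d" % i,))
           for i in range(2)]
for w in workers:
    w.start()

for url in ["/a", "/b", "/c"]:            # the web server just routes and moves on
    requests.put(url)

requests.join()                           # wait until every page is generated
for _ in workers:
    requests.put(None)
for w in workers:
    w.join()
```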

Key: DELEGATE DELIVERY
---
3. 6055570
TITLE: Subscribed update monitors
ABSTRACT: A user can monitor changes to information located on a network by registering with an update monitor service. The update monitor service can run as a stand alone server in the network or can run on a user computer or on the computer of an Internet Service Provider. The update monitor service obtains information about changes to information being monitored for the server on which the information is located or from a comparison of old and current versions of the information. The user can modify the list of information sources to be monitored by the update monitor service.

3. Says we can provide each client with appropriate methods for monitoring current product delivery for compliance with the latest instances of specified requirement governance.

Duhhh...?: Each client can be monitored by an objective, separate policing facility tracking the delivered state and the required state of the information provided for the client's use.

ADVANCE: Instead of running blind and waiting for the next update to trickle through the system, [3] provides a means for independent and objective machine monitoring of client delivery and compliance in real time. This allows the machine to know its current state, its current health and its proper work ethic. A critical element of dynamic autonomy with traceability.

Key: POLICE
---
4. 6275830
TITLE: Compile time variable size paging of constant pools
ABSTRACT: A method and apparatus for paging data in a computer system is provided. A set of data associated with a program unit is divided into pages such that no item of the set of data spans more than one page. The size of one page may vary from the size of another. When the program unit is compiled, metadata is generated that indicates the division of items into pages. At load time, a page mapping is generated based on the metadata. The page mapping is used to locate a item that belongs to the set of data. Other parts of the program unit, such as byte code, can contain references to items in the constant pool. Each reference specifies the number of the page in which the corresponding item will be stored at runtime, and the offset of that item within the page.

4. Says we can provide a means for metadata to identify and direct access to data structures throughout a mass of data within the web page, web site, web framework, web world. That metadata may be used in machine code, thus enabling a machine to autonomously select, retrieve, provision and employ process and data units throughout the internet-delivered processing mass.

But uhh but uhh but uhhh...?: The machine needs to know where the data is, just as an operating system needs to be able to touch compounded masses of data for processing within the memory resources of a hardware operating system or a software operating system.

ADVANCE: Instead of clueless web pages and web sites, [4] allows each supervisory metadata package to contain its own analog of the traditional disk operating system's File Allocation Table (FAT), representing the structural architecture and boundaries of the machine. This was once found only in hardware processing resources. [4] says the virtual page, website or framework can natively identify, by address index, any data anywhere in the structure. This is a key to machine "awareness" of the application space.
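For the code-minded: a minimal sketch of the [4] mapping idea, with simplifications of my own: "offset" here is an index within a page rather than a byte offset, page size is measured in characters, and all function names are invented.

# Hypothetical sketch of [4]: items are divided into variable-size pages
# so that no item spans more than one page; references are (page, offset).
def build_pages(items, max_page_size):
    pages, current, size = [], [], 0
    for item in items:
        if current and size + len(item) > max_page_size:
            pages.append(current)
            current, size = [], 0
        current.append(item)
        size += len(item)
    if current:
        pages.append(current)
    return pages

def build_mapping(pages):
    # Load-time page mapping generated from the compile-time metadata.
    mapping = {}
    for page_no, page in enumerate(pages):
        for offset, item in enumerate(page):
            mapping[item] = (page_no, offset)
    return mapping

def lookup(pages, ref):
    # A byte-code-style reference resolved through the mapping.
    page_no, offset = ref
    return pages[page_no][offset]

pool = ["count", "java/lang/String", "toString", "x"]
pages = build_pages(pool, max_page_size=20)
mapping = build_mapping(pages)
print(mapping["toString"], lookup(pages, mapping["toString"]))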

Key: CONDUCT
---
5. 6826744
TITLE: System and method for generating web sites in an arbitrary object framework
ABSTRACT: A system and method for generating computer applications in an arbitrary object framework. The method separates content, form, and function of the computer application so that each may be accessed or modified separately. The method includes creating arbitrary objects, managing the arbitrary objects throughout their life cycle in an object library, and deploying the arbitrary objects in a design framework for use in complex computer applications.

5. Says the above masses of data and processing capability may be segmented into segregated operations dealing with content, format and functionality arbitrarily [5], combining these into affiliated [5] web applications [1] using virtualized and arbitrated resources [2] and arbitrary governance [3] of addressable masses [4], providing a means to build applications without the need for programming knowledge or skill [5].

Blink Blink...?: A machine simply needs to know [1] what it touches, [2] where the solution is, [3] that the requirements are being met in real time, [4] where to touch, and [5] how to create. The building process (design/develop/deploy/determine) of applications at every knowable state may be further processed to build applications for human or other machine users autonomously (humans construct the elemental pieces of the system resource pool; the machine is able to pick up the task from there and complete the building process).

ADVANCE: Instead of human teams of programmers building applications (even if, or especially if, they use manual implementations of various design, develop and model environments), programming becomes an abstracted processing ecology that may be oriented toward the human ([5] as SiteFlash) or toward the machine ([5] as the creating element in this patent).
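For the code-minded: a minimal sketch of the [5] separation, under my own assumptions: content, form and function live as separate arbitrary objects in an object library and are only combined at deployment. ArbitraryObject and deploy are invented names, not SiteFlash internals.

# Hypothetical sketch of [5]: content, form and function are separated
# so each may be accessed or modified independently, then assembled.
class ArbitraryObject:
    def __init__(self, kind, payload):
        self.kind = kind         # 'content', 'form', or 'function'
        self.payload = payload

library = {
    "greeting": ArbitraryObject("content", "Hello, {name}"),
    "banner":   ArbitraryObject("form", "<h1>{body}</h1>"),
    "upper":    ArbitraryObject("function", lambda s: s.upper()),
}

def deploy(content_key, form_key, function_key, **params):
    # The design framework assembles a deliverable from the library;
    # swap any one object without touching the other two.
    text = library[content_key].payload.format(**params)
    text = library[function_key].payload(text)
    return library[form_key].payload.format(body=text)

print(deploy("greeting", "banner", "upper", name="world"))
# <h1>HELLO, WORLD</h1>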

Key: CREATE
---

So you create a system that can TAG, DELEGATE, POLICE, CONDUCT and CREATE, and funnel that activity into a system that delivers the machine-built applications to the client for deterministic use, tailored in usability, culture and governance to the user's unique requirements.
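To make the funnel concrete, a toy sketch (entirely my own construction, not anyone's product) wiring the five keys into one pipeline:

# Hypothetical sketch of the funnel: tag the request, delegate the build,
# police the result, conduct the addressing, create the deliverable.
def tag(request):
    return {**request, "tags": ["web", request["purpose"]]}

def delegate(tagged):
    # Hand the tagged work order to a builder (stand-in for [2]).
    return {"order": tagged, "artifact": f"app-for-{tagged['user']}"}

def police(built, required_tags):
    # Verify the delivery against requirements (stand-in for [3]).
    return all(t in built["order"]["tags"] for t in required_tags)

def conduct(built):
    # Resolve where every piece lives (stand-in for [4]).
    return {"artifact": built["artifact"], "address": (0, 0)}

def create(located):
    # Emit the finished application (stand-in for [5]).
    return f"deployed {located['artifact']} at {located['address']}"

order = tag({"user": "alice", "purpose": "reporting"})
built = delegate(order)
assert police(built, ["web"])
print(create(conduct(built)))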

The user is the computer.

Voilà! Gimme two beers. One for the janitor.

Congratulations. You've made it over the bridge. Now, take that farmhouse and await further orders.

Monday, September 3, 2007

Reference: Inventors for 6826744 and 7076521

I am placing this here as a placeholder to be pointed to from the Yahoo VCSY board, as the Yahoo message forum has a 4000-character limit. I may use this method of supplying information to boards without burdening the posted messages with large bodies of text.

Thanks to Morrie and all the other contributors here and thanks to all VCSY Longs ("long" VCSY stock aka investors aka treeforters native or honorary) for your work in uncovering, collecting and distributing information and opinions to the VCSY Long social network.

To Wit:
(REFERENCED FROM: http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=328&mid=328&tof=1&frt=1 )
The inventors behind 6826744 (Aubrey McAuley) and 7076521 (Jeff Davison) hold pioneer status in the field of network framework syndication (McAuley) and network automation management (Davison).

For convenience:
patent 6826744 (aka 744)
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=6,826,744.PN.&OS=PN/6,826,744&RS=PN/6,826,744
System and method for generating web sites in an arbitrary object framework

patent 7076521 (aka 521)
http://patft.uspto.gov/netacgi/nph-Parser?u=%2Fnetahtml%2Fsrchnum.htm&Sect1=PTO1&Sect2=HITOFF&p=1&r=1&l=50&f=G&d=PALL&s1=7076521.PN.&OS=PN/7076521&RS=PN/7076521
Web-based collaborative data collection system

(CONTINUED FROM : http://messages.finance.yahoo.com/Stocks_%28A_to_Z%29/Stocks_V/threadview?m=tm&bn=33693&tid=328&mid=328&tof=1&frt=1 )
---
Aubrey McAuley:
McAuley came in contact with pioneering work in distributed computing through his background in on-line comic book publication (funny, ain't it? - see http://www.comicbookdb.com/creator.php?ID=13032 ), which requires an entirely different boundary of capabilities than the traditional work limited to syndication of data, typified in parallel or hive computing.

From the unique problems encountered within the realm of syndicated distribution of functional graphics, McAuley developed a method to integrate the concepts of content and format management (typified in the dotcom era by designer suites such as Adobe ColdFusion and Microsoft FrontPage) and functionality management (the area occupied by IDEs [Integrated Development Environments] such as the Microsoft "Visual" series of environments).

Thus, McAuley's vision was of a single framework within which a person skilled in design (content/format) and development (functionality) could produce applications built using web-page graphics interaction (look and feel aka GUI [Graphic User Interface]).

In doing so, McAuley was the first to construct the idea of web applications.

Some additional reading:

McAuley's background in Austin, Texas centered around hive-computing work typified by the University of California/Berkeley WebOS:
http://www.cs.duke.edu/ari/issg/webos/
"WebOS began at the University of California, Berkeley in 1996 as part of the Network of Worksta(t)ions project." also known as 'NOW'.

that was taken further in the University of Texas/Austin project "Beyond Browsers"
http://www.cs.utexas.edu/users/less/bb/

Network-of-Workstations or 'NOW' is the original SaaS using thin or fat client workstations to construct distributed computing resources. In such a system, the browser serves as the GUI environment.

"Beyond Browsers" allowed ALL programmed applications, including browsers, to use geographically distributed computing resources.

VCSY NOW Solutions takes the central "network is the computer" concept to a comprehensive plateau with the emPath construct gluing all applications together so they may run on their respective proprietary platforms while interoperating with web services and other networked resources.

McAuley's work resulted in the 6826744 patent which took these distributed computing concepts and rendered them in an integrated content/format/functionality framework that referenced all resources within those three domains in an arbitrary way. A natural by-product of such arbitrated virtualization is the ability to "repurpose" any part of the application from any resource to any other required use.

A search on McAuley's current work shows him in the Adobe user forum... where one would have expected to find him these days based on his long-running experience and efforts:
http://www.adobeforums.com/cgi-bin/webx?224@@2cd6967d@.3bc48a75/9

While 744 is referred to as a "WebOS" patent, the WebOS is only one construct that may be derived from the patent. The patent is a universal framework maker from which any operating system and/or application and/or code fragments may be derived into any other purpose or use.

The framework also allows for the integration of any purposed applications into the framework. Thus, 744 becomes a comprehensive ecology for concept, design, development, deployment of applications from any arbitrary resources. The ecology may thus be extended to include life-cycle maintenance, management, governance or any other discipline relating to the purposes represented by the overall or piece-wise application resources.

744 represents one of the most significant developments in application existence in history and signals a new paradigm which promises to absorb legacy software development as well as creating wholly novel and unique applications.

One of the most important characteristics of 744 is the ability to morph operating systems and applications (both may be integrated into a single package) into any subject matter ecology where the tasking and use is carried out by SMEs [subject matter experts] with little to no programming skills.

Furthermore, such constructs may be further extended into vertical disciplines with similar but different applications and appliances. Thus SiteFlash becomes the parent with all derived child capabilities remaining traceable to the original product.

This capability alone teaches a wide array of available life-cycle design, development, maintenance, management, governance and property audit methods that provide a comprehensive means of value-assessment and policing unavailable through traditional methods and products.

The above barely touches the surface of the 744 paradigm, which is a transcendent approach to software.

---

Jeff Davison:
Davison's work in network management by agent methods uniquely equipped him to produce an executive kernel and a programming language constructed from and using markup language (see http://en.wikipedia.org/wiki/Markup_language). This construction and programming yields a universally extensible means of building virtual computer architectures.

Another useful facet of Davison's agent work is that the "markup" language may be any textual system for expressing data and structure, including but not limited to the most famous markup language, XML. Thus, the MLE (Markup Language Executive) described in the 7076521 patent is capable of running any code of any kind. This means the 521 virtual machine can be applied to any platform, using any code, to perform the task of collecting data from any resource and processing and transporting it to any consumer.

By combining agents of various capabilities, the MLE becomes a granular component for building virtualized software and/or virtualized hardware. The 521 machine may thus become anything "computer" for any purpose and is not limited by proprietary structure, allowing virtualization at any level for any purpose.
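For the code-minded: a minimal sketch of the executive idea, assuming XML purely for readability (521 is not limited to XML). The tag names and their semantics are invented for illustration; the point is that markup itself drives collection, processing and transport.

# Hypothetical sketch of a tiny "markup language executive": markup
# elements are treated as instructions over arbitrary data resources.
import xml.etree.ElementTree as ET

PROGRAM = """
<program>
  <collect from="sensor-a" as="reading"/>
  <transform var="reading" op="double"/>
  <deliver var="reading" to="consumer-1"/>
</program>
"""

SOURCES = {"sensor-a": 21}   # stand-in for any data resource

def execute(xml_text):
    env = {}
    for step in ET.fromstring(xml_text):
        if step.tag == "collect":
            env[step.get("as")] = SOURCES[step.get("from")]
        elif step.tag == "transform" and step.get("op") == "double":
            env[step.get("var")] *= 2
        elif step.tag == "deliver":
            print(f"to {step.get('to')}: {env[step.get('var')]}")

execute(PROGRAM)   # to consumer-1: 42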

This small bio covers only a small part of Mr. Davison's career. The reader would do well to examine his significant background experience more thoroughly than space or time here permits:

http://www.secinfo.com/duwTa.43ar.htm from December 31, 2000
Jeff Davison, Age 45
Chief Software Officer
Mr. Davison is a Professional Engineer, certified in electrical and electronic engineering. He has more than 20 years of Internet software and product development experience. He is the author of various software products, including the popular SNMX scripting language for network management and automation, offered by Diversified Data Resources. Most recently, he has been the chief developer of Emily, VCSY's proprietary XML tool.
---

744 and 521 have complementary architectures.

The 744 patent allows each deliverable to transform and evolve into higher forms, thus providing the various constructs (of any kind, with any resources) with increasing simplicity and ease.

Applying the 521 patent allows for the simulation/emulation of any software or electronic hardware construct. Pairing the 744 and 521 patents allows these constructs to evolve and repurpose via extensibility at all levels. For that reason, the potential scope of derivative computer products may number in the millions.