Tuesday, November 10, 2009

Agents exist! They are everywhere!

I was cribbing to a colleague of mine that I had to give an example of a specific kind of multiagent system, a kind that I thought didn't exist. The colleague suggested that it would be impossible to do that because there is not a single multiagent system in the world, let alone one of a specific kind.

He couldn't be more wrong. Multiagent systems are everywhere; it is just that we build these systems without using agent-oriented concepts, so we do not see them as such. The application eBay is a multiagent system: it involves eBay (the organization), bidders, sellers, payment processors, and so on. The application Orbitz is a multiagent system: it involves customers, airlines, Orbitz (the organization), credit card companies, banks, and so on. Your home security system is a multiagent system if it includes active monitoring by a security agency. These are multiagent systems because they involve interactions between autonomous parties, in other words agents.

You and your microwave do not constitute a multiagent system: the microwave is under your control. However, it would be a multiagent system if your microwave could say "No, I don't want to defrost the chicken now". Whereas you and the security agency that installed the sensors constitute a multiagent system, neither two or more sensors by themselves nor the sensors in conjunction with you constitute one. There is no sense in which such a sensor is autonomous in relation to you or to the other sensors.

Every agent is a component, but not every component is an agent. Every multiagent system is necessarily distributed, but not every distributed system is a multiagent system.

The only test of agenthood is autonomy. Something is not an agent merely because it has a sense, reason, act (SRA) loop. Autonomy can only be understood in relation to other agents; the SRA loop is about the internal construction of the agent. Whether a bidder on eBay is a human, "intelligent" software using an SRA loop, or "stupid" software -- for example, one that blindly raises its bid every so often -- it is an agent all the same.

An agent is not an agent because it is written in JADE or WADE (agent-oriented programming frameworks); it is not one because it was modeled and designed using agent-oriented abstractions. Anything that is autonomous is an agent; anything that is not is not an agent, and labeling such a thing as an agent is to abuse the term. In fact, the term "autonomous agent" is itself redundant.

As multiagent system researchers, our goal is to enable the programming of such systems in a way that accommodates autonomy.

Thursday, September 24, 2009

Beyond loose coupling: Completely decoupled agents

In the beginning, a software system was thought of as a process. The process invoked methods to carry out its intended tasks. Both object-oriented and procedural programs are essentially of this nature. The various system components were said to be tightly coupled or integrated.

This view of a software system evolved into one that involved multiple communicating processes, typically via message passing. This view reflected a looser coupling between processes and emphasized interoperation between processes via protocols. Traditionally, a protocol specifies the ordering and synchronization constraints upon the sending and receiving of messages by each process.
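To make the traditional notion concrete, here is a minimal sketch (the protocol, states, and message names are my own illustration, not any standard) of a protocol as an ordering constraint: a finite-state machine that accepts only the message sequence request, then response, then ack. Anything that deviates from the prescribed order is simply rejected.

```python
# Traditional protocol as a finite-state machine over message orderings.
# state -> {legal message: next state}
LEGAL = {
    "start":     {"request": "requested"},
    "requested": {"response": "responded"},
    "responded": {"ack": "done"},
    "done":      {},
}

def conforms(trace):
    """Return True iff the message trace obeys the ordering constraints."""
    state = "start"
    for msg in trace:
        if msg not in LEGAL[state]:
            return False            # message out of order: reject
        state = LEGAL[state][msg]
    return state == "done"          # must reach the final state

print(conforms(["request", "response", "ack"]))   # True
print(conforms(["response", "request", "ack"]))   # False: order violated
```

Note that the check is purely syntactic: it says nothing about what the messages mean, only about when they may occur.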

It is time for another sea change in software systems modeling. Instead of talking in terms of processes, we need to talk in terms of autonomous agents engaging each other. This shift is essential given that many of today's Internet applications are services provided by autonomous organizations, and typically involve multiple such organizations. An agent has a legal identity like you and me; properties such as trust, reputation, and responsibility can be attributed to agents. By contrast, a process has no legal identity, and it makes no sense to talk about it in terms of the above properties.

The word 'autonomous' has been much used and abused, and tends to invoke skeptical looks from many and even derision from some. Others conflate it with autonomic. I mean autonomous in the specific sense that no agent has control over another. Autonomy for me is strictly an interagent relation, not between agents and their developers, and it most certainly has nothing to do with intelligent behavior.

Traditional protocols are all about control; therefore, they are not suitable to realize engagements. Engaging means interacting meaningfully. There is nothing meaningful about traditional protocols. The difference between TCP and an e-business protocol such as RosettaNet is only in their functionality, not in the nature of their specifications.

To be able to engage, agents have to be coupled only to the extent required to interact in a semantically correct manner, and no more. By semantic, I mean at the level of the application, and in general -- for any application, whether it be banking, supply chains, selling books, or anything else.

Commitments yield exactly such a notion of semantic correctness: an agent is interacting correctly as long as it satisfies its commitments. What do we care then whether goods follow payment or payment follows goods! It's all the same as long as the involved commitments are satisfied!

With commitment-based protocols, agents would be only nominally coupled, for they would be free to act as they please. An agent could send any message anytime. It would even have the choice of violating its commitments (typically though, there would be penalties for doing that). I prefer to call such agents completely decoupled agents, for they are no more coupled than is absolutely necessary.
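The contrast with message-ordering protocols can be sketched in a few lines. The class below is a toy illustration of my own (not a standard API): a commitment C(debtor, creditor, antecedent, consequent) says that if the antecedent holds, the debtor is committed to the creditor to bring about the consequent. Crucially, no ordering over messages is imposed; only the eventual status of the commitment matters.

```python
# Toy commitment: C(debtor, creditor, antecedent, consequent).
class Commitment:
    def __init__(self, debtor, creditor, antecedent, consequent):
        self.debtor, self.creditor = debtor, creditor
        self.antecedent, self.consequent = antecedent, consequent

    def status(self, facts):
        """Judge the commitment against the set of facts that hold so far."""
        if self.consequent in facts:
            return "satisfied"    # consequent brought about, in any order
        if self.antecedent in facts:
            return "detached"     # debtor must still act, or face sanctions
        return "conditional"      # nothing owed yet

# Seller commits to the buyer: if paid, then goods are delivered.
c = Commitment("seller", "buyer", "paid", "goods-delivered")

# Goods before payment is perfectly fine -- the commitment is satisfied,
# even though a message-ordering protocol might have forbidden this trace.
print(c.status({"goods-delivered"}))   # satisfied
print(c.status({"paid"}))              # detached: seller still owes delivery
```

Whether payment or delivery happens first, the run is correct so long as the commitment ends up satisfied; that is exactly the sense in which the agents are decoupled.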

Wednesday, July 22, 2009

A Memo on Academic Research

These are tough times for academic research in Computer Science. Multi-institution research and the transfer and exploitation of technology in "real-world" -- the term has come to mean nothing to me, and its usage even makes me cringe -- scenarios are increasingly prerequisites to funding. It is simply not enough to publish in peer-reviewed venues. An intellectual proposition is considered to be of little merit unless accompanied by deliverables -- I don't know the history of this term, but I suspect it originated in industry -- in the form of software, hardware, documentation, and demos.

What are my primary responsibilities to society as a researcher in Computer Science? To apply my mind to the challenges people face and to strive for semantic solutions, not ad hoc ones; to disseminate the results of my efforts; and to mentor students.

These responsibilities conflict with much of what currently passes in academia. For example, applying my intellect pretty much rules out multi-institution research because of the bureaucratic commitments such an activity entails and the reduction of any intellectual proposition to the lowest common denominator. Coordinating two institutions fruitfully is hard enough; coordinating a consortium of 15, each of which brings multiple research groups, is an exercise in futility. Coordination should not be confused with dissemination. Disseminating is like spreading spores -- the spores might take root in fertile ground and germinate into something wonderful. By contrast, coordination is like a marriage of persons who speak completely different languages. Most cultures hold the marriage of minds as an ideal; why in academia then have we created conditions that encourage institutions to enter into a marriage of convenience?

Some of the things I mentioned above, such as cooperation and technology transfer, are not bad in and of themselves. What is bad is making them integral to research proposals. Research is risky: one might not achieve the results one had hoped for, and it typically takes many years for a line of research to mature. Such being the case, how can every researcher promise a transfer of technology to industry? I would view with suspicion anyone who promises that: either they are being dishonest or there is little research in their proposal. It would be a thoroughly good idea to have separate funding for transfer of technology; that's precisely the role of incubators. (Some universities indeed have incubators set aside for this purpose.)

Clearly, we are accountable to those who fund our research, whether it be the taxpayer or a private entity. But how may we be judged? By peer review -- by more or less the same criteria that are used to judge a PhD dissertation. The number of quality publications, citations, technology transferred, PhD students graduated, software produced, and demos could all be factors in the evaluation. However, the biggest part of judging will continue to be "did the research lead to significant new results and insights?" The judges could include some of the people who approved the proposal, with the others replaced by independent experts in the field -- it's like being called to do jury duty.

My intention is not to paint the software industry in a malign light. However, we must accept that industry and academia simply have different motivations, different objectives and timescales for achieving those, and different standards of judging achievement. It is best to leave them both to their own devices.

Friday, July 17, 2009

Services and Components, Architecture and Design

I heard a couple of things recently. One, that design subsumes architecture: when designing a system, you design the components and the interconnections between them (the interconnections representing the architecture). Two, that services are merely components, thus implying that SOA is old wine in a new bottle.

A service is a component in the broad sense that it may be independently designed and packaged, and its computational usage is via a published interface. Thus, just as for components, it makes sense to talk about service composition, substitution, interoperability, and so on. However, the engineering of services is a world apart from that of components; engineering services requires fundamentally different abstractions. The differences arise from their pragmatic aspects. Components have no stakeholders; services do. Components have no autonomy; services (via their stakeholders) do. Components do not interact (rather they invoke each other); services interact. There is no question of a component being compliant with respect to other components; the question is fundamental for services given their autonomy. It makes no sense to sanction a component; the risk of sanctions helps keep services compliant.

Moreover, in today's world of services, more so than ever, it pays immensely to treat architecture as an independent artifact of engineering. A service is not a system in the traditional sense; it is simply a participant in an open system (such as the Web, or to be more specific, the Amazon marketplace). A service's architecture is a description of the service's interactions with other services, all of which may serve the interests of independent stakeholders. At a high level, architecture entails the commitments that a service could be party to, the contextual regulations that are binding upon it (such as HIPAA or Amazon marketplace policies), the monitoring of its compliance, and the sanctions that it may face in case of noncompliance. Where only one stakeholder is concerned with a system in its entirety, such a normative view of affairs is of little value. But when multiple stakeholders are concerned, as is the case in any services application, each stakeholder would want to make sure that the architecture accurately reflects a normative stance that is compatible with its requirements.